file_id (string, length 5-9) | content (string, length 100-5.25M) | local_path (string, length 66-70) | kaggle_dataset_name (string, length 3-50, nullable) | kaggle_dataset_owner (string, length 3-20, nullable) | kversion (string, length 497-763, nullable) | kversion_datasetsources (string, length 71-5.46k, nullable) | dataset_versions (string, length 338-235k, nullable) | datasets (string, length 334-371, nullable) | users (string, length 111-264, nullable) | script (string, length 100-5.25M) | df_info (string, length 0-4.87M) | has_data_info (bool, 2 classes) | nb_filenames (int64, 0-370) | retreived_data_description (string, length 0-4.44M) | script_nb_tokens (int64, 25-663k) | upvotes (int64, 0-1.65k) | tokens_description (int64, 25-663k) | tokens_script (int64, 25-663k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
129073232
|
<jupyter_start><jupyter_text>CIFAKE: Real and AI-Generated Synthetic Images
# CIFAKE: Real and AI-Generated Synthetic Images
The quality of AI-generated images has rapidly increased, leading to concerns of authenticity and trustworthiness.
CIFAKE is a dataset that contains 60,000 synthetically-generated images and 60,000 real images (collected from CIFAR-10). Can computer vision techniques be used to detect when an image is real or has been generated by AI?
Further information on this dataset can be found here: [Bird, J.J., Lotfi, A. (2023). CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126.](https://arxiv.org/abs/2303.14126)

## Dataset details
The dataset contains two classes - REAL and FAKE.
For REAL, we collected the images from Krizhevsky & Hinton's [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html)
For the FAKE images, we generated the equivalent of CIFAR-10 with Stable Diffusion version 1.4
There are 100,000 images for training (50k per class) and 20,000 for testing (10k per class)
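As a quick sanity check of the folder structure (a minimal sketch, assuming the Kaggle layout with `train/REAL`, `train/FAKE`, `test/REAL`, and `test/FAKE` subfolders used in the notebook below):
```python
import os

root = "/kaggle/input/cifake-real-and-ai-generated-synthetic-images"
for split in ("train", "test"):
    for label in ("REAL", "FAKE"):
        folder = os.path.join(root, split, label)
        # expect 50,000 per class for train and 10,000 per class for test
        print(split, label, len(os.listdir(folder)))
```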
## Papers with Code
The dataset and all studies using it are linked using [Papers with Code](https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images)
[https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images](https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images)
## References
If you use this dataset, you **must** cite the following sources
[Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdfl)
[Bird, J.J., Lotfi, A. (2023). CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126.](https://arxiv.org/abs/2303.14126)
Real images are from Krizhevsky & Hinton (2009), fake images are from Bird & Lotfi (2023). The Bird & Lotfi study is a preprint currently available on [ArXiv](https://arxiv.org/abs/2303.14126) and this description will be updated when the paper is published.
## Notes
The updates to the dataset on the 28th of March 2023 did not change anything; the file formats ".jpeg" were renamed ".jpg" and the root folder was uploaded to meet Kaggle's usability requirements.
## License
This dataset is published under the [same MIT license as CIFAR-10](https://github.com/wichtounet/cifar-10/blob/master/LICENSE):
*Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:*
*The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.*
*THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.*
Kaggle dataset identifier: cifake-real-and-ai-generated-synthetic-images
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import random
import cv2
import os
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.utils import to_categorical, plot_model
IMG_DIMS = (96, 96, 3)
NUM_CLASSES = 2
BATCH_SIZE = 64
EPOCHS = 80
LEARNING_RATE = 1e-3
data = []
labels = []
images_path = "/kaggle/input/cifake-real-and-ai-generated-synthetic-images/train"
fake_path = os.path.join(images_path, "FAKE")
real_path = os.path.join(images_path, "REAL")
fake_files = [
os.path.join(fake_path, f)
for f in os.listdir(fake_path)
if os.path.isfile(os.path.join(fake_path, f))
]
real_files = [
os.path.join(real_path, f)
for f in os.listdir(real_path)
if os.path.isfile(os.path.join(real_path, f))
]
image_files = fake_files + real_files
random.shuffle(image_files)
counter = 0
for img in image_files:
if counter >= 12000:
break
image = cv2.imread(img)
image = cv2.resize(image, (IMG_DIMS[0], IMG_DIMS[1]))
image = img_to_array(image)
data.append(image)
label = img.split(os.path.sep)[-2]
if label == "REAL":
label = 1
else:
label = 0
counter += 1
labels.append([label])
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)
(trainX, testX, trainY, testY) = train_test_split(
data, labels, test_size=0.3, random_state=101
)
trainY = to_categorical(trainY, num_classes=2)
testY = to_categorical(testY, num_classes=2)
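# Quick sanity check (illustrative): with the 12,000-image cap and a 70/30 split, trainX should be
# roughly (8400, 96, 96, 3) and the one-hot label arrays should have 2 columns.
print("train:", trainX.shape, trainY.shape)
print("test:", testX.shape, testY.shape)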
def build_model():
model = tf.keras.models.Sequential(
[
tf.keras.layers.Conv2D(32, (3, 3), padding="same", input_shape=IMG_DIMS),
tf.keras.layers.Activation("relu"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.MaxPooling2D(pool_size=(3, 3)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Conv2D(64, (3, 3), padding="same"),
tf.keras.layers.Activation("relu"),
tf.keras.layers.BatchNormalization(),
# tf.keras.layers.Conv2D(64, (3, 3), padding="same"),
# tf.keras.layers.Activation("relu"),
# tf.keras.layers.BatchNormalization(),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Conv2D(128, (3, 3), padding="same"),
tf.keras.layers.Activation("relu"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(128, (3, 3), padding="same"),
tf.keras.layers.Activation("relu"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1024),
tf.keras.layers.Activation("relu"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(NUM_CLASSES),
tf.keras.layers.Activation("softmax"),
]
)
return model
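# Optional, illustrative check: instantiate the network once to inspect layer output shapes and the
# parameter count for 96x96x3 inputs.
build_model().summary()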
from tensorflow.keras.losses import CategoricalCrossentropy
cce_loss = CategoricalCrossentropy()
model = build_model()
model.compile(optimizer="adam", loss=cce_loss, metrics=["accuracy"])
model.fit(
trainX,
trainY,
batch_size=64,
validation_data=(testX, testY),
steps_per_epoch=len(trainX) // BATCH_SIZE,
epochs=EPOCHS,
verbose=1,
)
import pandas as pd
loss = pd.DataFrame(model.history.history)
loss.plot()
from sklearn.metrics import classification_report
predictions = model.predict(testX)
predicted_labels = np.argmax(predictions, axis=1)
true_labels = np.argmax(testY, axis=1)
report = classification_report(true_labels, predicted_labels)
print(report)
datas = []
labelss = []
images_paths = "/kaggle/input/cifake-real-and-ai-generated-synthetic-images/test"
fake_paths = os.path.join(images_paths, "FAKE")
real_paths = os.path.join(images_paths, "REAL")
fake_file = [
os.path.join(fake_paths, f)
for f in os.listdir(fake_paths)
if os.path.isfile(os.path.join(fake_paths, f))
]
real_file = [
os.path.join(real_paths, f)
for f in os.listdir(real_paths)
if os.path.isfile(os.path.join(real_paths, f))
]
image_file = fake_file + real_file
random.shuffle(image_file)
counter = 0
for img in image_file:  # loop over the shuffled test-set tiles
if counter >= 1000:
break
image = cv2.imread(img)
image = cv2.resize(image, (IMG_DIMS[0], IMG_DIMS[1]))
image = img_to_array(image)
datas.append(image)
label = img.split(os.path.sep)[-2]
if label == "REAL":
label = 1
else:
label = 0
counter += 1
labelss.append([label])
datas = np.array(datas, dtype="float") / 255.0
labelss = np.array(labelss)
# labelss = to_categorical(labelss, num_classes=2)
(TrainX, TestX, TrainY, TestY) = train_test_split(
datas, labelss, test_size=0.01, random_state=101
)
TrainY = to_categorical(TrainY, num_classes=2)
from sklearn.metrics import classification_report
pred = model.predict(TrainX)
pred_label = np.argmax(pred, axis=1)
true_label = np.argmax(TrainY, axis=1)
reports = classification_report(true_label, pred_label)
print(reports)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/073/129073232.ipynb
|
cifake-real-and-ai-generated-synthetic-images
|
birdy654
|
[{"Id": 129073232, "ScriptId": 38368072, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12633395, "CreationDate": "05/10/2023 19:09:23", "VersionNumber": 1.0, "Title": "notebookb0d7290cbe", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 195.0, "LinesInsertedFromPrevious": 195.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184805334, "KernelVersionId": 129073232, "SourceDatasetVersionId": 5256696}]
|
[{"Id": 5256696, "DatasetId": 3041726, "DatasourceVersionId": 5329502, "CreatorUserId": 2039603, "LicenseName": "Other (specified in description)", "CreationDate": "03/28/2023 16:00:29", "VersionNumber": 3.0, "Title": "CIFAKE: Real and AI-Generated Synthetic Images", "Slug": "cifake-real-and-ai-generated-synthetic-images", "Subtitle": "Can Computer Vision detect when images have been generated by AI?", "Description": "# CIFAKE: Real and AI-Generated Synthetic Images\nThe quality of AI-generated images has rapidly increased, leading to concerns of authenticity and trustworthiness.\n\nCIFAKE is a dataset that contains 60,000 synthetically-generated images and 60,000 real images (collected from CIFAR-10). Can computer vision techniques be used to detect when an image is real or has been generated by AI?\n\nFurther information on this dataset can be found here: [Bird, J.J., Lotfi, A. (2023). CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126.](https://arxiv.org/abs/2303.14126)\n\n\n\n## Dataset details\nThe dataset contains two classes - REAL and FAKE. \n\nFor REAL, we collected the images from Krizhevsky & Hinton's [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html)\n\nFor the FAKE images, we generated the equivalent of CIFAR-10 with Stable Diffusion version 1.4\n\nThere are 100,000 images for training (50k per class) and 20,000 for testing (10k per class)\n\n## Papers with Code\nThe dataset and all studies using it are linked using [Papers with Code](https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images)\n[https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images](https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images)\n\n\n## References\nIf you use this dataset, you **must** cite the following sources\n\n[Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdfl)\n\n[Bird, J.J., Lotfi, A. (2023). CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126.](https://arxiv.org/abs/2303.14126)\n\nReal images are from Krizhevsky & Hinton (2009), fake images are from Bird & Lotfi (2023). 
The Bird & Lotfi study is a preprint currently available on [ArXiv](https://arxiv.org/abs/2303.14126) and this description will be updated when the paper is published.\n\n## Notes\n\nThe updates to the dataset on the 28th of March 2023 did not change anything; the file formats \".jpeg\" were renamed \".jpg\" and the root folder was uploaded to meet Kaggle's usability requirements.\n\n## License\nThis dataset is published under the [same MIT license as CIFAR-10](https://github.com/wichtounet/cifar-10/blob/master/LICENSE):\n\n*Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:*\n\n*The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.*\n\n*THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.*", "VersionNotes": "Kaggle compatibility fix (no actual changes)", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3041726, "CreatorUserId": 2039603, "OwnerUserId": 2039603.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5256696.0, "CurrentDatasourceVersionId": 5329502.0, "ForumId": 3081274, "Type": 2, "CreationDate": "03/24/2023 13:22:42", "LastActivityDate": "03/24/2023", "TotalViews": 13728, "TotalDownloads": 1803, "TotalVotes": 46, "TotalKernels": 15}]
|
[{"Id": 2039603, "UserName": "birdy654", "DisplayName": "Jordan J. Bird", "RegisterDate": "07/03/2018", "PerformanceTier": 2}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import random
import cv2
import os
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.utils import to_categorical, plot_model
IMG_DIMS = (96, 96, 3)
NUM_CLASSES = 2
BATCH_SIZE = 64
EPOCHS = 80
LEARNING_RATE = 1e-3
data = []
labels = []
images_path = "/kaggle/input/cifake-real-and-ai-generated-synthetic-images/train"
fake_path = os.path.join(images_path, "FAKE")
real_path = os.path.join(images_path, "REAL")
fake_files = [
os.path.join(fake_path, f)
for f in os.listdir(fake_path)
if os.path.isfile(os.path.join(fake_path, f))
]
real_files = [
os.path.join(real_path, f)
for f in os.listdir(real_path)
if os.path.isfile(os.path.join(real_path, f))
]
image_files = fake_files + real_files
random.shuffle(image_files)
counter = 0
for img in image_files:
if counter >= 12000:
break
image = cv2.imread(img)
image = cv2.resize(image, (IMG_DIMS[0], IMG_DIMS[1]))
image = img_to_array(image)
data.append(image)
label = img.split(os.path.sep)[-2]
if label == "REAL":
label = 1
else:
label = 0
counter += 1
labels.append([label])
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)
(trainX, testX, trainY, testY) = train_test_split(
data, labels, test_size=0.3, random_state=101
)
trainY = to_categorical(trainY, num_classes=2)
testY = to_categorical(testY, num_classes=2)
def build_model():
model = tf.keras.models.Sequential(
[
tf.keras.layers.Conv2D(32, (3, 3), padding="same", input_shape=IMG_DIMS),
tf.keras.layers.Activation("relu"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.MaxPooling2D(pool_size=(3, 3)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Conv2D(64, (3, 3), padding="same"),
tf.keras.layers.Activation("relu"),
tf.keras.layers.BatchNormalization(),
# tf.keras.layers.Conv2D(64, (3, 3), padding="same"),
# tf.keras.layers.Activation("relu"),
# tf.keras.layers.BatchNormalization(),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Conv2D(128, (3, 3), padding="same"),
tf.keras.layers.Activation("relu"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(128, (3, 3), padding="same"),
tf.keras.layers.Activation("relu"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1024),
tf.keras.layers.Activation("relu"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(NUM_CLASSES),
tf.keras.layers.Activation("softmax"),
]
)
return model
from tensorflow.keras.losses import CategoricalCrossentropy
cce_loss = CategoricalCrossentropy()
model = build_model()
model.compile(optimizer="adam", loss=cce_loss, metrics=["accuracy"])
model.fit(
trainX,
trainY,
batch_size=64,
validation_data=(testX, testY),
steps_per_epoch=len(trainX) // BATCH_SIZE,
epochs=EPOCHS,
verbose=1,
)
import pandas as pd
loss = pd.DataFrame(model.history.history)
loss.plot()
from sklearn.metrics import classification_report
predictions = model.predict(testX)
predicted_labels = np.argmax(predictions, axis=1)
true_labels = np.argmax(testY, axis=1)
report = classification_report(true_labels, predicted_labels)
print(report)
datas = []
labelss = []
images_paths = "/kaggle/input/cifake-real-and-ai-generated-synthetic-images/test"
fake_paths = os.path.join(images_paths, "FAKE")
real_paths = os.path.join(images_paths, "REAL")
fake_file = [
os.path.join(fake_paths, f)
for f in os.listdir(fake_paths)
if os.path.isfile(os.path.join(fake_paths, f))
]
real_file = [
os.path.join(real_paths, f)
for f in os.listdir(real_paths)
if os.path.isfile(os.path.join(real_paths, f))
]
image_file = fake_file + real_file
random.shuffle(image_file)
counter = 0
for img in image_file:  # loop over the shuffled test-set tiles
if counter >= 1000:
break
image = cv2.imread(img)
image = cv2.resize(image, (IMG_DIMS[0], IMG_DIMS[1]))
image = img_to_array(image)
datas.append(image)
label = img.split(os.path.sep)[-2]
if label == "REAL":
label = 1
else:
label = 0
counter += 1
labelss.append([label])
datas = np.array(datas, dtype="float") / 255.0
labelss = np.array(labelss)
# labelss = to_categorical(labelss, num_classes=2)
(TrainX, TestX, TrainY, TestY) = train_test_split(
datas, labelss, test_size=0.01, random_state=101
)
TrainY = to_categorical(TrainY, num_classes=2)
from sklearn.metrics import classification_report
pred = model.predict(TrainX)
pred_label = np.argmax(pred, axis=1)
true_label = np.argmax(TrainY, axis=1)
reports = classification_report(true_label, pred_label)
print(reports)
| false | 0 | 1,907 | 0 | 2,950 | 1,907 |
||
129073823
|
<jupyter_start><jupyter_text>ecti2021
Kaggle dataset identifier: ecti2021
<jupyter_script>#
# # Lab_02_Unet_training_flood_assignment_v1_Kaggle
# ## Student: Daniela de los Santos
# Make sure to use a GPU and have access to an internet connection in the Kaggle notebook:
# 1. On the three dots on the top left, select "Notebook Options" and then "Accelerator" to choose the GPU P100, and select "Variables & Files" under Persistence. **Note that Kaggle allows 30h per week per user of accelerated computing. Plan your work accordingly. It takes time to load the data and you may experience unavailability of GPUs or Session Errors**
# 1. Make sure your Kaggle account is phone verified by clicking "Get phone verified" in the left sidebar under "Notebook options" and following the steps (this step is required to switch on the internet connection needed to install packages).
# 1. After phone verification, the full settings menu should be visible. Toggle the "Internet" switch.
# More visualizations of the process of connecting the notebook to the internet are provided [here](https://stackoverflow.com/questions/68142524/cannot-access-internet-on-kaggle-notebook)
# ## Requirements:
# 1. Downloading the [train data](https://cernbox.cern.ch/s/GtHXqYOzAJnGHPN) and the [val_without_ref_labels.zip](https://cernbox.cern.ch/s/EXHXXinESUxyhFi)
# 1. Go to the "Side Bar", Click on "Data"
# 1. Upload the following four files as `ecti2021`: `train data.zip`, `val_without_ref_labels.zip`, and the `water_tiles.csv` and `background_tiles.csv` produced in Lab_01_data_preparation_flood_v1.
# # Step 0: Environment setting
# load packages
import os
import sys
import cv2
import numpy as np
import pandas as pd
from glob import glob
import torch.nn as nn
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
import segmentation_models_pytorch as smp
# Set up plotting options
import pickle
from pickle import load
import torch
from torch.utils.data import Dataset, DataLoader
# # Step 1: Load the dataset files
# Set path to where dataset is downloaded
dataset_root = (
"/kaggle/input/ecti2021" # set accordingly based on how you uploaded the data
)
# get number of training/validation regions
train_dir = os.path.join(dataset_root, "train/train")
test_dir = os.path.join(dataset_root, "val_without_ref_labels/val")
n_train_regions = len(glob(train_dir + "/*/"))
n_test_regions = len(glob(test_dir + "/*/"))
# NOTE: make sure number of regions is NOT 0, otherwise it might be that the code is not able to read the data.
print("Number of training temporal-regions: {}".format(n_train_regions))
print("Number of test temporal-regions: {}".format(n_test_regions))
# From Lab_01_data_preparation_flood_v1, we identified that the ETCI 2021 Competition on Flood Detection is composed of 33'405 tiles. However, we also identified tiles that have empty VV/VH images but a non-zero label. We already excluded these tiles when saving `water_tiles.csv` and `background_tiles.csv`, so the dataset used in this notebook should contain 27'214 tiles.
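# As a quick check (an illustrative sketch, assuming every tile has a VV image under "<region>/tiles/vv/"),
# we can count the available tiles directly:
print("train vv tiles:", len(glob(train_dir + "/**/vv/*.png", recursive=True)))
print("test vv tiles:", len(glob(test_dir + "/**/vv/*.png", recursive=True)))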
# ## Utils functions
def visualize(df_row, figsize=[25, 15]):
# get image paths
vv_image_path = df_row["vv_image_path"]
vh_image_path = df_row["vh_image_path"]
flood_label_path = df_row["flood_label_path"]
water_body_label_path = df_row["water_body_label_path"]
# create RGB image from S1 images
rgb_name = get_filename(vv_image_path)
vv_image = cv2.imread(vv_image_path, 0) / 255.0
vh_image = cv2.imread(vh_image_path, 0) / 255.0
rgb_image = s1_to_rgb(vv_image, vh_image)
# get water body label mask
water_body_label_image = cv2.imread(water_body_label_path, 0) / 255.0
# plot images
plt.figure(figsize=tuple(figsize))
if df_row.isnull().sum() > 0:
# plot RGB S1 image
plt.subplot(1, 2, 1)
plt.imshow(rgb_image)
plt.title(rgb_name)
# plot water body mask
plt.subplot(1, 2, 2)
plt.imshow(water_body_label_image)
plt.title("Water body mask")
else:
flood_label_image = cv2.imread(flood_label_path, 0) / 255.0
# plot RGB S1 image
plt.subplot(1, 3, 1)
plt.imshow(rgb_image)
plt.title(rgb_name)
# plot flood label mask
plt.subplot(1, 3, 2)
plt.imshow(flood_label_image)
plt.title("Flood mask")
# plot water body mask
plt.subplot(1, 3, 3)
plt.imshow(water_body_label_image)
plt.title("Water body mask")
def s1_to_rgb(vv_image, vh_image):
eps = 1e-06
ratio_image = np.clip(
np.nan_to_num(vv_image / (vh_image + eps), 0), 0, 1
) # outside [0,1] will be clipped
rgb_image = np.stack(
(vv_image, vh_image, ratio_image), axis=2
) # different from lab01: np.abs(red) / np.abs(green)
return rgb_image
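# Minimal usage sketch (illustrative, with dummy arrays): s1_to_rgb stacks VV, VH and their clipped
# ratio into a (H, W, 3) false-colour composite with values in [0, 1].
_vv = np.random.rand(256, 256)
_vh = np.random.rand(256, 256)
print(s1_to_rgb(_vv, _vh).shape)  # -> (256, 256, 3)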
def visualize_result(df_row, prediction, figsize=[25, 15]):
vv_image = cv2.imread(df_row["vv_image_path"], 0) / 255.0
vh_image = cv2.imread(df_row["vh_image_path"], 0) / 255.0
rgb_input = s1_to_rgb(vv_image, vh_image)
plt.figure(figsize=tuple(figsize))
plt.subplot(1, 2, 1)
plt.imshow(rgb_input)
plt.title("RGB w/ result")
plt.subplot(1, 2, 2)
plt.imshow(prediction)
plt.title("Result")
# # Step 2: Create training dataframes
def get_filename(filepath, split_symbol="/"):
return filepath.split(split_symbol)[-1]
def read_csv(csvpath, split_symbol="/"):
path_list = np.loadtxt(csvpath, delimiter=" ", dtype=str).tolist()
return [get_filename(pth, split_symbol) for pth in path_list]
water_image_names = read_csv(
"/kaggle/input/ecti2021/water_tiles.csv"
) # from lab01 make sure the path is correct
background_image_names = read_csv("/kaggle/input/ecti2021/background_tiles.csv")
region_name_dates0 = ["_".join(n.split("_")[:2]) for n in water_image_names]
region_name_dates1 = ["_".join(n.split("_")[:2]) for n in background_image_names]
vv_image_paths, vh_image_paths, flood_label_paths, water_body_label_paths = (
[],
[],
[],
[],
)
water_image_paths, background_image_paths = [], []
for i in range(len(water_image_names)):
vv_image_path = os.path.join(
train_dir, region_name_dates0[i], "tiles", "vv", water_image_names[i]
)
vv_image_paths.append(vv_image_path)
water_image_paths.append(vv_image_path)
# get vh image path
vh_image_name = water_image_names[i].replace("vv", "vh")
vh_image_path = os.path.join(
train_dir, region_name_dates0[i], "tiles", "vh", vh_image_name
)
vh_image_paths.append(vh_image_path)
# get flood mask path
flood_image_name = water_image_names[i].replace("_vv", "")
flood_label_path = os.path.join(
train_dir, region_name_dates0[i], "tiles", "flood_label", flood_image_name
)
flood_label_paths.append(flood_label_path)
# get water body mask path
water_body_label_name = water_image_names[i].replace("_vv", "")
water_body_label_path = os.path.join(
train_dir,
region_name_dates0[i],
"tiles",
"water_body_label",
water_body_label_name,
)
water_body_label_paths.append(water_body_label_path)
for i in range(len(background_image_names)):
vv_image_path = os.path.join(
train_dir, region_name_dates1[i], "tiles", "vv", background_image_names[i]
)
vv_image_paths.append(vv_image_path)
background_image_paths.append(vv_image_path)
# get vh image path
vh_image_name = background_image_names[i].replace("vv", "vh")
vh_image_path = os.path.join(
train_dir, region_name_dates1[i], "tiles", "vh", vh_image_name
)
vh_image_paths.append(vh_image_path)
# get flood mask path
flood_image_name = background_image_names[i].replace("_vv", "")
flood_label_path = os.path.join(
train_dir, region_name_dates1[i], "tiles", "flood_label", flood_image_name
)
flood_label_paths.append(flood_label_path)
# get water body mask path
water_body_label_name = background_image_names[i].replace("_vv", "")
water_body_label_path = os.path.join(
train_dir,
region_name_dates1[i],
"tiles",
"water_body_label",
water_body_label_name,
)
water_body_label_paths.append(water_body_label_path)
water_image_names[0]
# Shuffle the index and then split in train and validation
n = len(vv_image_paths) # number of images in the dataset
arr = np.arange(n) # array 0...n-1
np.random.shuffle(arr) # shuffle it
train_idx = arr[0 : int(np.round(0.80 * n))] # 80% train
valid_idx = arr[int(np.round(0.80 * n)) :] # 20% validation
print("Number of tiles in training set:", train_idx.size)
print("Number of tiles in validation set:", valid_idx.size)
print(
"Number of tiles in the training and validation set:",
train_idx.size + valid_idx.size,
)
vv_image_paths_train = list(np.array(vv_image_paths)[train_idx])
vh_image_paths_train = list(np.array(vh_image_paths)[train_idx])
flood_label_paths_train = list(np.array(flood_label_paths)[train_idx])
water_body_label_paths_train = list(np.array(water_body_label_paths)[train_idx])
train_paths = {
"vv_image_path": vv_image_paths_train,
"vh_image_path": vh_image_paths_train,
"flood_label_path": flood_label_paths_train,
"water_body_label_path": water_body_label_paths_train,
}
train_df = pd.DataFrame(train_paths)
print(train_df.shape)
train_df.head()
vv_image_paths_valid = list(np.array(vv_image_paths)[valid_idx])
vh_image_paths_valid = list(np.array(vh_image_paths)[valid_idx])
flood_label_paths_valid = list(np.array(flood_label_paths)[valid_idx])
water_body_label_paths_valid = list(np.array(water_body_label_paths)[valid_idx])
valid_paths = {
"vv_image_path": vv_image_paths_valid,
"vh_image_path": vh_image_paths_valid,
"flood_label_path": flood_label_paths_valid,
"water_body_label_path": water_body_label_paths_valid,
}
valid_df = pd.DataFrame(valid_paths)
print(valid_df.shape)
valid_df.head()
# ## # Step 2b: Create training undersampled dataframes
background_image_paths_train = [
path for path in background_image_paths if path in vv_image_paths_train
]
background_num_train = len(background_image_paths_train)
print("Number of background tiles included in training:", background_num_train)
water_image_paths_train = [
path for path in water_image_paths if path in vv_image_paths_train
]
water_image_names_train = [get_filename(pth) for pth in water_image_paths_train]
region_name_dates2 = ["_".join(n.split("_")[:2]) for n in water_image_names_train]
water_num_train = len(water_image_paths_train)
print("Number of water tiles included in training:", water_num_train)
num_samples = int(water_num_train * 0.15)  # number of background tiles to keep = 15% of the water-tile count
arr = np.arange(background_num_train)  # indices over all available background tiles
np.random.shuffle(arr)  # shuffle so the kept background tiles are drawn at random
background_image_paths_train_undersampled = list(
np.array(background_image_paths_train)[arr[0:num_samples]]
)
background_image_names_train_undersampled = [
get_filename(pth) for pth in background_image_paths_train_undersampled
]
print(
"Number of background tiles included in training after undersampling:",
len(background_image_names_train_undersampled),
)
region_name_dates3 = [
"_".join(n.split("_")[:2]) for n in background_image_names_train_undersampled
]
(
vh_image_paths_train_undersampled,
flood_label_paths_train_undersampled,
water_body_label_paths_train_undersampled,
) = ([], [], [])
for i in range(len(water_image_names_train)):
# get vh image path
vh_image_name = water_image_names_train[i].replace("vv", "vh")
vh_image_path = os.path.join(
train_dir, region_name_dates2[i], "tiles", "vh", vh_image_name
)
vh_image_paths_train_undersampled.append(vh_image_path)
# get flood mask path
flood_image_name = water_image_names_train[i].replace("_vv", "")
flood_label_path = os.path.join(
train_dir, region_name_dates2[i], "tiles", "flood_label", flood_image_name
)
flood_label_paths_train_undersampled.append(flood_label_path)
# get water body mask path
water_body_label_name = water_image_names_train[i].replace("_vv", "")
water_body_label_path = os.path.join(
train_dir,
region_name_dates2[i],
"tiles",
"water_body_label",
water_body_label_name,
)
water_body_label_paths_train_undersampled.append(water_body_label_path)
vv_image_paths_train_undersampled = list(water_image_paths_train)  # copy so the appends below do not mutate water_image_paths_train
print(
"Number of water body label included in training after undersampling:",
len(water_body_label_paths_train_undersampled),
)
for i in range(len(background_image_names_train_undersampled)):
vv_image_paths_train_undersampled.append(
background_image_paths_train_undersampled[i]
)
# get vh image path
vh_image_name = background_image_names_train_undersampled[i].replace("vv", "vh")
vh_image_path = os.path.join(
train_dir, region_name_dates3[i], "tiles", "vh", vh_image_name
)
vh_image_paths_train_undersampled.append(vh_image_path)
# get flood mask path
flood_image_name = background_image_names_train_undersampled[i].replace("_vv", "")
flood_label_path = os.path.join(
train_dir, region_name_dates3[i], "tiles", "flood_label", flood_image_name
)
flood_label_paths_train_undersampled.append(flood_label_path)
# get water body mask path
water_body_label_name = background_image_names_train_undersampled[i].replace(
"_vv", ""
)
water_body_label_path = os.path.join(
train_dir,
region_name_dates3[i],
"tiles",
"water_body_label",
water_body_label_name,
)
water_body_label_paths_train_undersampled.append(water_body_label_path)
assert (
len(vv_image_paths_train_undersampled)
== len(vh_image_paths_train_undersampled)
== len(flood_label_paths_train_undersampled)
== len(water_body_label_paths_train_undersampled)
)
print(
"Number of overall images included in training after undersampling:",
len(water_body_label_paths_train_undersampled),
)
# **1) Based on the considerations made in Lab1 and the calculation above, explain what the original dataset looks like in terms of class imbalance and how this changed in the undersampled dataset.**
# **ANSWER HERE:**
# In Lab1, we had obtained an imbalanced class distribution. At the pixel level, 98% corresponded to background pixels. At the tile level, even after excluding blank images, the background tiles were still nearly 62% of the total.
# Here, in the undersampled dataset, the class imbalance problem is addressed by including only a subset of the available background tiles. In particular, the snippet `num_samples = int(water_num_train * 0.15)` computes the number of background tiles to include in the undersampled training set, setting it to 15% of the total number of water tiles in the original training set. That number of tiles is then selected from a shuffled array. On the other hand, the number of water tiles/water body labels stays the same after undersampling (8323). The code below provides a visualization similar to the ones used in Lab1, showing the new class distribution.
# Define the class labels and counts
classes = ["Water Tiles", "Background Tiles"]
counts = [
np.size(water_body_label_paths_train_undersampled),
np.size(background_image_names_train_undersampled),
]
# Create a bar chart
plt.bar(classes, counts)
total = sum(counts)
percentages = [(count / total) * 100 for count in counts]
# Add a title and axis labels
plt.title("Class Distribution at the tile level after undersampling")
plt.xlabel("Class")
plt.ylabel("Count")
for i, count in enumerate(counts):
percentage = (count / total) * 100
plt.text(i, count + 10, f"{percentage:.1f}%", ha="center")
# Show the plot
plt.show()
train_paths_undersample = {
"vv_image_path": vv_image_paths_train_undersampled,
"vh_image_path": vh_image_paths_train_undersampled,
"flood_label_path": flood_label_paths_train_undersampled,
"water_body_label_path": water_body_label_paths_train_undersampled,
}
train_df_undersample = pd.DataFrame(train_paths_undersample)
print(train_df_undersample.shape)
train_df_undersample.head()
# # Step 3: Visualize images
# This section is meant to be used to experiment the data. Feel free to change things and to explore the data.
train_df
cv2.imread(train_df_undersample.iloc[1200]["vv_image_path"], 0)
train_df_undersample.iloc[3600]["vv_image_path"]
visualize(train_df_undersample.iloc[3600])
visualize(train_df.iloc[3677])
# # Step 4: Setup the dataset for machine learning
# Since Phase 1 (the development phase) of the ETCI 2021 Competition on Flood Detection provides training data (which includes reference data) and validation data (without reference data until Phase 1 concludes), we will split our training dataset (which contains flood masks) into a smaller training set and a development set.
# ### Create a PyTorch dataset
# We will be using the PyTorch deep learning library to format this dataset and create our machine learning model. Therefore we will need to create a custom Dataset class and pass it into a DataLoader object (see the [PyTorch Dataset Tutorial](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html) for more detail on the topic). To compute image transformations we will use the [Albumentations](https://github.com/albumentations-team/albumentations) package.
class ETCIDataset(Dataset):
def __init__(self, dataframe, split, transform=None):
self.split = split
self.dataset = dataframe
self.transform = transform
def __len__(self):
return self.dataset.shape[0]
def __getitem__(self, index):
example = {}
df_row = self.dataset.iloc[index]
# load vv and vh images
vv_image = cv2.imread(df_row["vv_image_path"], 0) / 255.0
vh_image = cv2.imread(df_row["vh_image_path"], 0) / 255.0
# convert vv and vh images to rgb
rgb_image = s1_to_rgb(vv_image, vh_image)
if self.split == "test":
# no flood mask should be available
example["image"] = rgb_image.transpose((2, 0, 1)).astype(
"float32"
) # HWC->CHW
else:
# load ground truth flood mask
flood_mask = cv2.imread(df_row["flood_label_path"], 0) / 255.0
# compute transformations
if self.transform:
augmented = self.transform(image=rgb_image, mask=flood_mask)
rgb_image = augmented["image"]
flood_mask = augmented["mask"]
example["image"] = rgb_image.transpose((2, 0, 1)).astype(
"float32"
) # HWC->CHW
example["mask"] = flood_mask.astype("int64")
return example
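# Quick usage sketch (illustrative, relies on train_df built above): fetch a single example and check
# that the image comes back as CHW float32 and the mask as int64.
_sample = ETCIDataset(train_df, split="train")[0]
print(_sample["image"].shape, _sample["image"].dtype, _sample["mask"].dtype)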
# **2) Check the [Albumentations](https://github.com/albumentations-team/albumentations) package and implement both Vertical and Horizontal flip with probability 0.5 and a RandomResizedCrop of 256 on both dimensions.**
import albumentations as A
### BEGINNING OF THE CODE
transform = A.Compose(
[
A.RandomResizedCrop(height=256, width=256),
A.OneOf([A.HorizontalFlip(p=1), A.VerticalFlip(p=1)], p=0.5),
]
)
# Apply the transforms to an image
image = cv2.imread(train_df_undersample.iloc[3600]["vv_image_path"])
transformed = transform(image=image)
transformed_image = transformed["image"]
# Display original and transformed images side by side
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
ax[0].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
ax[0].set_title("Original")
ax[1].imshow(cv2.cvtColor(transformed_image, cv2.COLOR_BGR2RGB))
ax[1].set_title("Transformed")
plt.show()
####END OF THE CODE
train_dataset = ETCIDataset(train_df, split="train", transform=transform)
valid_dataset = ETCIDataset(valid_df, split="valid", transform=None)
print("Trainining set size:", len(train_dataset))
print("Validation set size:", len(valid_dataset))
batch_size = 64
train_loader = DataLoader(
train_dataset, batch_size=batch_size, shuffle=True, num_workers=2, pin_memory=True
)
valid_loader = DataLoader(
valid_dataset, batch_size=batch_size, shuffle=False, num_workers=2, pin_memory=True
)
train_undersampled_dataset = ETCIDataset(
train_df_undersample, split="train", transform=transform
)
train_undersampled_loader = DataLoader(
train_undersampled_dataset,
batch_size=batch_size,
shuffle=True,
num_workers=2,
pin_memory=True,
)
print("Undersampled Trainining set size:", len(train_undersampled_dataset))
# # Step 5: Deep learning model creation
# ### Select hardware to train model
device = "cuda"
# **3) We will grab a segmentation model from the [Segmentation Models](https://github.com/qubvel/segmentation_models.pytorch) package ([documentation here](https://smp.readthedocs.io/en/latest/)). Read the documentation carefully and implement a UNet with resnet34 as encoder, without any pre-trained weights, and with the appropriate in_channels and classes based on the dataset.**
import segmentation_models_pytorch as smp
def create_model(in_channels=3, num_classes=2):
model = smp.Unet(
encoder_name="resnet34",
encoder_weights=None,
in_channels=in_channels,  # Because we are working with RGB composites, we set in_channels=3
classes=num_classes,
)
return model
model_1 = create_model()
model_1.to(device) # load model into GPU memory
# ### Metric tracker
from sklearn.metrics import confusion_matrix
class EvalTracker:
def __init__(self, n_classes=2, smooth=0.0001):
self.n_classes = n_classes
self.reset()
self.smooth = smooth
def reset(self):
self.cm = np.zeros((self.n_classes, self.n_classes))
self.count = 0
def update(self, pred, target):
# pred: [B, 2, H, W]
# target: [B, H, W]
self.count += pred.shape[0]
# reshape inputs
pred = pred.argmax(dim=1).flatten() # [B*H*W]
target = target.flatten() # [B*H*W]
# put into cpu memory
pred = pred.detach().cpu().numpy()
target = target.detach().cpu().numpy()
# compute confusion matrix values
self.cm += confusion_matrix(target, pred, labels=np.arange(self.n_classes))  # pass labels so the matrix is always n_classes x n_classes, even if a batch contains a single class
def get_mean(self):
tn, fp, fn, tp = self.cm.ravel()
# compute IoU
iou = tp / (tp + fp + fn + self.smooth)
prec = tp / (tp + fp + self.smooth)
rec = tp / (tp + fn + self.smooth)
f1 = 2.0 * prec * rec / (prec + rec)
return iou, prec, rec, f1
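# Minimal sketch of how the tracker is used (illustrative, with random tensors):
_tracker = EvalTracker()
_tracker.update(torch.rand(2, 2, 8, 8), torch.randint(0, 2, (2, 8, 8)))
print(_tracker.get_mean())  # (IoU, precision, recall, F1) accumulated over all updates so far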
# # Step 6: Train model on the full dataset
# ### Model config, optimizer and loss function
# set the number of times you want the model to see all of the training data
epochs = 10
learning_rate = 1e-4
optimizer = torch.optim.Adam(model_1.parameters(), lr=learning_rate)
criteria_no_weights = nn.CrossEntropyLoss(weight=None)
# ### Training loop
train_loss_dict = {}
val_loss_dict = {}
for epoch in range(epochs):
print("Epoch: [{}/{}]".format(epoch + 1, epochs))
# train set
pbar = tqdm(train_loader)
train_loss = 0.0
model_1.train()
eval_logger = EvalTracker()
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_1(image)
# get loss
loss = criteria_no_weights(pred, mask)
# update the model
optimizer.zero_grad()
loss.backward()
optimizer.step()
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
train_loss += loss.item() * image.size(0)
# calculate the average loss for both training and validation
train_loss /= len(train_loader.dataset)
train_loss_dict[epoch] = train_loss
# valid set
pbar = tqdm(valid_loader)
model_1.eval()
eval_logger = EvalTracker()
val_loss = 0.0
with torch.no_grad():
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_1(image)
# get loss
loss = criteria_no_weights(pred, mask)
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
val_loss += loss.item() * image.size(0)
val_loss /= len(valid_loader.dataset)
val_loss_dict[epoch] = val_loss
# Save the training loss values
with open("./train_loss_1_1_BCE.pkl", "wb") as file:
pickle.dump(train_loss_dict, file)
# Save the validation loss values
with open("./val_loss_1_1_BCE.pkl", "wb") as file:
pickle.dump(val_loss_dict, file)
# save model
torch.save(model_1.state_dict(), "model_1_BCE.pt")
# ### Plot Losses
# Load the training and validation loss dictionaries
train_loss = load(open("/kaggle/working/train_loss_1_1_BCE.pkl", "rb"))
val_loss = load(open("/kaggle/working/val_loss_1_1_BCE.pkl", "rb"))
# Retrieve each dictionary's values
train_values = train_loss.values()
val_values = val_loss.values()
# Generate a sequence of integers to represent the epoch numbers
epochs_range = range(1, epochs + 1)
# Plot and label the training and validation loss values
plt.plot(epochs_range, train_values, label="Training Loss")
plt.plot(epochs_range, val_values, label="Validation Loss")
# Add in a title and axes labels
plt.title("Training and Validation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
# Set the tick locations
plt.xticks(np.arange(0, epochs + 1, 2))
# Display the plot
plt.legend(loc="best")
plt.show()
# # Step 7: Train model on the undersampled dataset
# ### Model config, optimizer and loss function
batch_size = 64
epochs = 10
learning_rate = 1e-4
model_2 = create_model()
model_2.to(device)
optimizer = torch.optim.Adam(model_2.parameters(), lr=learning_rate)
criteria_no_weights = nn.CrossEntropyLoss(weight=None)
# **4) Implement a training loop similar to the one above but for the undersampled dataset. Use model_2 to avoid any overwriting of the previous model. Save the model as 'model_2d_BCE.pt'***
# ### Training loop
### CODE HERE###
train_loss_dict_2 = {}
val_loss_dict_2 = {}
for epoch in range(epochs):
print("Epoch: [{}/{}]".format(epoch + 1, epochs))
# train set
pbar = tqdm(train_undersampled_loader)
train_loss = 0.0
model_2.train()
eval_logger = EvalTracker()
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_2(image)
# get loss
loss = criteria_no_weights(pred, mask)
# update the model
optimizer.zero_grad()
loss.backward()
optimizer.step()
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
train_loss += loss.item() * image.size(0)
# calculate the average loss for both training and validation
train_loss /= len(train_undersampled_loader.dataset)
train_loss_dict_2[epoch] = train_loss
# valid set
pbar = tqdm(valid_loader)
model_2.eval()
eval_logger = EvalTracker()
val_loss = 0.0
with torch.no_grad():
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_2(image)
# get loss
loss = criteria_no_weights(pred, mask)
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
val_loss += loss.item() * image.size(0)
val_loss /= len(valid_loader.dataset)
val_loss_dict_2[epoch] = val_loss
# Save the training loss values
with open("./train_loss_2_BCE.pkl", "wb") as file:
pickle.dump(train_loss_dict_2, file)
# Save the validation loss values
with open("./val_loss_2_BCE.pkl", "wb") as file:
pickle.dump(val_loss_dict_2, file)
# save model
torch.save(model_2.state_dict(), "model_2_BCE.pt")
train_path = r"train_df.csv"
valid_path = r"valid_df.csv"
train_under_path = r"train_df_undersample.csv"
train_df.to_csv(train_path)
valid_df.to_csv(valid_path)
train_df_undersample.to_csv(train_under_path)
# ### Plot Losses
# Load the training and validation loss dictionaries
train_loss = load(open("/kaggle/working/train_loss_2_BCE.pkl", "rb"))
val_loss = load(open("/kaggle/working/val_loss_2_BCE.pkl", "rb"))
# Retrieve each dictionary's values
train_values = train_loss.values()
val_values = val_loss.values()
# Generate a sequence of integers to represent the epoch numbers
epochs_range = range(1, epochs + 1)
# Plot and label the training and validation loss values
plt.plot(epochs_range, train_values, label="Training Loss")
plt.plot(epochs_range, val_values, label="Validation Loss")
# Add in a title and axes labels
plt.title("Training and Validation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
# Set the tick locations
plt.xticks(np.arange(0, epochs + 1, 2))
# Display the plot
plt.legend(loc="best")
plt.show()
# # Step 8: Train model on the undersampled dataset with a weighted loss function
# ### Defining the split for the weighted Cross Entropy Loss
# It takes quite a long time to calculate; the ratio is around 5% flooded pixels, 95% background
n_size = len(flood_label_paths_train_undersampled)
n_flooded = np.zeros(n_size)  # flooded-pixel count per tile
for i in tqdm(range(n_size)):
flood_label = cv2.imread(flood_label_paths_train_undersampled[i], 0)
n_flooded[i] = np.sum(flood_label) / 255
n_flooded_ratio = np.divide(n_flooded, 256 * 256)
flooded_pixels = np.sum(n_flooded)
background_pixels = 256 * 256 * n_size - np.sum(n_flooded)
print("Flooded Pixels:", flooded_pixels)
print("Background Pixels:", background_pixels)
print("Ratio:", np.mean(n_flooded_ratio))
# ### Model config, optimizer and loss function
# **5) Define the "Model config, optimizer and loss function" section as previously done but for "model_3" which will be trained with a [Weighted Cross Entropy Loss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html). Remember to store the weights as a torch tensor, to load it in the GPU, and be careful on the order of your weights.**
###CODE HERE
# Define the model configuration
model_3 = create_model()
model_3.to(device)
learning_rate = 1e-4
optimizer = torch.optim.Adam(model_3.parameters(), lr=learning_rate)
# define loss function with weights
weights = torch.tensor([0.0462, (1 - 0.0462)])
weights = weights.to(device)
criteria_weights = nn.CrossEntropyLoss(weight=weights)
# **6) Why did you choose the weights you used for the CrossEntropyLoss?**
# **ANSWER HERE:**
# The choice of class weights for the weighted cross-entropy loss is, in this case, a strategy to tackle the class imbalance in the dataset. I set the weight for the background class to roughly 0.05 and for the flooded class to roughly 0.95, thus reversing the original class distribution. This means we give more weight and importance to the minority class we are interested in, as it matters more in the segmentation task than the background class.
# ### Training Loop
train_loss_dict = {}
val_loss_dict = {}
for epoch in range(epochs):
print("Epoch: [{}/{}]".format(epoch + 1, epochs))
# train set
pbar = tqdm(train_undersampled_loader)
train_loss = 0.0
model_3.train()
eval_logger = EvalTracker()
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_3(image)
# get loss
loss = criteria_weights(pred, mask)
# update the model
optimizer.zero_grad()
loss.backward()
optimizer.step()
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
train_loss += loss.item() * image.size(0)
# calculate the average loss for both training and validation
train_loss /= len(train_undersampled_loader.dataset)
train_loss_dict[epoch] = train_loss
# valid set
pbar = tqdm(valid_loader)
val_loss = 0.0
model_3.eval()
eval_logger = EvalTracker()
with torch.no_grad():
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_3(image)
# get loss
loss = criteria_weights(pred, mask)
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
val_loss += loss.item() * image.size(0)
val_loss /= len(valid_loader.dataset)
val_loss_dict[epoch] = val_loss
# Save the training loss values
with open("./train_loss_2d_WBCE.pkl", "wb") as file:
pickle.dump(train_loss_dict, file)
# Save the validation loss values
with open("./val_loss_2d_WBCE.pkl", "wb") as file:
pickle.dump(val_loss_dict, file)
# save model
torch.save(model_3.state_dict(), "model_2d_WBCE.pt")
# ### Plot Losses
from pickle import load
# Load the training and validation loss dictionaries
train_loss = load(open("/kaggle/working/train_loss_2d_WBCE.pkl", "rb"))
val_loss = load(open("/kaggle/working/val_loss_2d_WBCE.pkl", "rb"))
# Retrieve each dictionary's values
train_values = train_loss.values()
val_values = val_loss.values()
# Generate a sequence of integers to represent the epoch numbers
epochs_range = range(1, epochs + 1)
# Plot and label the training and validation loss values
plt.plot(epochs_range, train_values, label="Training Loss")
plt.plot(epochs_range, val_values, label="Validation Loss")
# Add in a title and axes labels
plt.title("Training and Validation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
# Set the tick locations
plt.xticks(np.arange(0, epochs + 1, 2))
# Display the plot
plt.legend(loc="best")
plt.show()
# **7) How are the three models (model_1_BCE.pt, model_2d_BCE.pt, and model_2d_WBCE.pt) performing? Comment on the performance of the models.**
# **ANSWER HERE:**
# We can analyse and compare the performance of the models based on the metrics of the last epoch. The table below summarizes the results.
# | Model | Loss | MIoU | Precision | Recall | F1 score |
# |---------------------------------------------------- |-------- |-------- |----------- |-------- |---------- |
# | Full sample | 0.0054 | 0.5933 | 0.8283 | 0.6766 | 0.7448 |
# | Undersampled dataset | 0.0182 | 0.5958 | 0.8036 | 0.6973 | 0.7467 |
# | Undersampled dataset with a weighted loss function | 0.4686 | 0.0838 | 0.0842 | 0.9497 | 0.1547 |
# Based on the given metrics, it appears that the first two models (Full sample and Undersampled dataset) perform similarly and better than the third model (Undersampled dataset with a weighted loss function). The first two models have similar values for mIoU, Precision, Recall, and F1 score, with the Undersampled dataset model having slightly higher mIoU, Recall, and F1 score but slightly lower Precision. The third model has a much higher loss (which, being computed with class weights, is not directly comparable to the other two), much lower values for mIoU, Precision, and F1 score, but a much higher Recall.
# In general, which model is better depends on the specific problem. For example, if high precision is important, then the Full sample model may be preferred. If high recall is important, then the Undersampled dataset model may be preferred. In our exercise, there is a clear tradeoff. Overlooking false positives (that is, considering an area flooded when in fact it is not) might inflate the costs faced by policymakers. On the other hand, overlooking false negatives (that is, ignoring areas that are flooded and treating them as if they are not) might have negative effects from a human rights perspective, as the cost of getting help to flooded areas could be underestimated, and the strategy to cover those areas, provide adequate aid, etc., might be suboptimal. In this context, we might want to give more importance to the **recall** metric: $\frac{TP}{(TP+FN)}$.
# Even though the weighted loss function model has the highest recall, the rest of the metrics perform very poorly, which makes it a problematic choice. In that sense, the model with the undersampled dataset without weights might be a better choice. I will use this model in Step 10.
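# For reference, the metrics reported above are computed from the pixel-level confusion matrix accumulated by `EvalTracker`: $\text{IoU} = \frac{TP}{TP+FP+FN}$, $\text{Precision} = \frac{TP}{TP+FP}$, $\text{Recall} = \frac{TP}{TP+FN}$, and $F1 = \frac{2\,\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}}$.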
# # Step 10: Test models
# ### Create a test dataset
# Let's create Dataset and DataLoader objects for the test set (the competition validation split, provided without reference labels). This time we will not have flood labels.
vv_image_paths = sorted(glob(test_dir + "/**/vv/*.png", recursive=True))
vv_image_names = [get_filename(pth) for pth in vv_image_paths]
region_name_dates = ["_".join(n.split("_")[:2]) for n in vv_image_names]
vh_image_paths, flood_label_paths, water_body_label_paths, region_names = [], [], [], []
for i in range(len(vv_image_paths)):
# get vh image path
vh_image_name = vv_image_names[i].replace("vv", "vh")
vh_image_path = os.path.join(
test_dir, region_name_dates[i], "tiles", "vh", vh_image_name
)
vh_image_paths.append(vh_image_path)
# no flood mask is available for the test split, so store a placeholder
flood_label_paths.append(np.NaN)
# get water body mask path
water_body_label_name = vv_image_names[i].replace("_vv", "")
water_body_label_path = os.path.join(
test_dir,
region_name_dates[i],
"tiles",
"water_body_label",
water_body_label_name,
)
water_body_label_paths.append(water_body_label_path)
# get region name
region_name = region_name_dates[i].split("_")[0]
region_names.append(region_name)
test_paths = {
"vv_image_path": vv_image_paths,
"vh_image_path": vh_image_paths,
"flood_label_path": flood_label_paths,
"water_body_label_path": water_body_label_paths,
"region": region_names,
}
test_df = pd.DataFrame(test_paths)
print(test_df.shape)
test_df.head()
# ### Run inference
# **8) Choose your best performing model from Steps 6-9 and run inference below:**
# load model
model_test = create_model()
model_test.load_state_dict(
torch.load("/kaggle/working/model_2_BCE.pt")
) # CHANGE THE MODEL HERE. model_2_BCE.pt (undersampled data, unweighted loss) is loaded, as chosen in the answer above
model_test.to(device)
test_dataset = ETCIDataset(test_df, split="test", transform=None)
test_loader = DataLoader(
test_dataset, batch_size=batch_size, shuffle=False, num_workers=2, pin_memory=True
) # make sure shuffle is False
final_predictions = []
model_test.eval()
with torch.no_grad():
for batch in tqdm(test_loader):
# load image and mask into device memory
image = batch["image"].to(device)
# pass images into model
pred = model_test(image)
# compute class predictions, i.e. flood or no-flood
class_pred = pred.argmax(dim=1)
# convert class prediction to numpy
class_pred = class_pred.detach().cpu().numpy()
# add to final predictions
final_predictions.append(class_pred.astype("uint8"))
final_predictions = np.concatenate(final_predictions, axis=0)
# check final prediction shape
print(final_predictions.shape)
# ### Visualize some results
index = 252
visualize_result(valid_df.iloc[index], final_predictions[index], figsize=(17, 10))
index = -100
visualize_result(valid_df.iloc[index], final_predictions[index], figsize=(17, 10))
index = 1910
visualize_result(valid_df.iloc[index], final_predictions[index], figsize=(17, 10))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/073/129073823.ipynb
|
ecti2021
|
luisquinones41
|
[{"Id": 129073823, "ScriptId": 38361828, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15040553, "CreationDate": "05/10/2023 19:17:21", "VersionNumber": 1.0, "Title": "hw3-newversion", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 1000.0, "LinesInsertedFromPrevious": 1000.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184806190, "KernelVersionId": 129073823, "SourceDatasetVersionId": 5626554}]
|
[{"Id": 5626554, "DatasetId": 3234863, "DatasourceVersionId": 5701774, "CreatorUserId": 12136140, "LicenseName": "Unknown", "CreationDate": "05/07/2023 15:07:02", "VersionNumber": 3.0, "Title": "ecti2021", "Slug": "ecti2021", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Data Update 2023-05-07", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3234863, "CreatorUserId": 12136140, "OwnerUserId": 12136140.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5626554.0, "CurrentDatasourceVersionId": 5701774.0, "ForumId": 3300026, "Type": 2, "CreationDate": "05/07/2023 14:14:14", "LastActivityDate": "05/07/2023", "TotalViews": 105, "TotalDownloads": 6, "TotalVotes": 0, "TotalKernels": 3}]
|
[{"Id": 12136140, "UserName": "luisquinones41", "DisplayName": "Luis Qui\u00f1ones", "RegisterDate": "10/28/2022", "PerformanceTier": 0}]
|
#
# # Lab_02_Unet_training_flood_assignment_v1_Kaggle
# ## Student: Daniela de los Santos
# Make sure to use a GPU and have access to internet connection in the Kaggle notebook:
# 1. On the three dots on the top left, select "Notebook Options" and then "Accelerator" to choose the GPU P100, and select "Variables & Files" under Persistence. **Note that Kaggle allows 30h per week per user of accelerated computing. Plan your work accordingly. It takes time to load the data and you may experience unavailability of GPUs or Session Errors**
# 1. Make sure your Kaggle account is phone verified by clicking "Get phone verified" in the left sidebar under "Notebook options" and following the steps (this step is required to switch on the internet connection needed to install packages).
# 1. After phone verification, the full settings menu should be visible. Toggle the "Internet" switch.
# More visualizations of the process to connect the notebook to the intern are provided [here](https://stackoverflow.com/questions/68142524/cannot-access-internet-on-kaggle-notebook)
# ## Requirements:
# 1. Downloading the [train data](https://cernbox.cern.ch/s/GtHXqYOzAJnGHPN) and the [val_without_ref_labels.zip](https://cernbox.cern.ch/s/EXHXXinESUxyhFi)
# 1. Go to the "Side Bar", Click on "Data"
# 1. Upload as `ecti2021` the following four files: `train data.zip`, `val_without_ref_labels.zip` , and the `water_tiles.csv` and `background_tiles.csv` from the Lab_01_data_preparation_flood_v1.
# # Step 0: Enviroment setting
# load packages
import os
import sys
import cv2
import numpy as np
import pandas as pd
from glob import glob
import torch.nn as nn
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
import segmentation_models_pytorch as smp
# Set up plotting options
import pickle
from pickle import load
import torch
from torch.utils.data import Dataset, DataLoader
# # Step 1: Load the dataset files
# Set path to where dataset is downloaded
dataset_root = (
"/kaggle/input/ecti2021" # set accordingly based on how you uploaded the data
)
# get number of training/validation regions
train_dir = os.path.join(dataset_root, "train/train")
test_dir = os.path.join(dataset_root, "val_without_ref_labels/val")
n_train_regions = len(glob(train_dir + "/*/"))
n_test_regions = len(glob(test_dir + "/*/"))
# NOTE: make sure number of regions is NOT 0, otherwise it might be that the code is not able to read the data.
print("Number of training temporal-regions: {}".format(n_train_regions))
print("Number of test temporal-regions: {}".format(n_test_regions))
# From the Lab_01_data_preparation_flood_v1, we indentified that the ETCI 2021 Competition on Flood Detection is composed by 33'405 tiles. However, we also identified tiles that have empty VV/VH but have a non-zero label. We already excluded this tiles when saving the `water_tiles.csv` and `background_tiles.csv`. The dataset used in this notebook should contain 27'214 tiles.
# ## Utils functions
def visualize(df_row, figsize=[25, 15]):
# get image paths
vv_image_path = df_row["vv_image_path"]
vh_image_path = df_row["vh_image_path"]
flood_label_path = df_row["flood_label_path"]
water_body_label_path = df_row["water_body_label_path"]
# create RGB image from S1 images
rgb_name = get_filename(vv_image_path)
vv_image = cv2.imread(vv_image_path, 0) / 255.0
vh_image = cv2.imread(vh_image_path, 0) / 255.0
rgb_image = s1_to_rgb(vv_image, vh_image)
# get water body label mask
water_body_label_image = cv2.imread(water_body_label_path, 0) / 255.0
# plot images
plt.figure(figsize=tuple(figsize))
if df_row.isnull().sum() > 0:
# plot RGB S1 image
plt.subplot(1, 2, 1)
plt.imshow(rgb_image)
plt.title(rgb_name)
# plot water body mask
plt.subplot(1, 2, 2)
plt.imshow(water_body_label_image)
plt.title("Water body mask")
else:
flood_label_image = cv2.imread(flood_label_path, 0) / 255.0
# plot RGB S1 image
plt.subplot(1, 3, 1)
plt.imshow(rgb_image)
plt.title(rgb_name)
# plot flood label mask
plt.subplot(1, 3, 2)
plt.imshow(flood_label_image)
plt.title("Flood mask")
# plot water body mask
plt.subplot(1, 3, 3)
plt.imshow(water_body_label_image)
plt.title("Water body mask")
def s1_to_rgb(vv_image, vh_image):
eps = 1e-06
ratio_image = np.clip(
np.nan_to_num(vv_image / (vh_image + eps), 0), 0, 1
) # outside [0,1] will be clipped
rgb_image = np.stack(
(vv_image, vh_image, ratio_image), axis=2
) # different from lab01: np.abs(red) / np.abs(green)
return rgb_image
def visualize_result(df_row, prediction, figsize=[25, 15]):
vv_image = cv2.imread(df_row["vv_image_path"], 0) / 255.0
vh_image = cv2.imread(df_row["vh_image_path"], 0) / 255.0
rgb_input = s1_to_rgb(vv_image, vh_image)
plt.figure(figsize=tuple(figsize))
plt.subplot(1, 2, 1)
plt.imshow(rgb_input)
plt.title("RGB w/ result")
plt.subplot(1, 2, 2)
plt.imshow(prediction)
plt.title("Result")
# # Step 2: Create training dataframes
def get_filename(filepath, split_symbol="/"):
return filepath.split(split_symbol)[-1]
def read_csv(csvpath, split_symbol="/"):
path_list = np.loadtxt(csvpath, delimiter=" ", dtype=str).tolist()
return [get_filename(pth, split_symbol) for pth in path_list]
water_image_names = read_csv(
"/kaggle/input/ecti2021/water_tiles.csv"
) # from lab01 make sure the path is correct
background_image_names = read_csv("/kaggle/input/ecti2021/background_tiles.csv")
region_name_dates0 = ["_".join(n.split("_")[:2]) for n in water_image_names]
region_name_dates1 = ["_".join(n.split("_")[:2]) for n in background_image_names]
vv_image_paths, vh_image_paths, flood_label_paths, water_body_label_paths = (
[],
[],
[],
[],
)
water_image_paths, background_image_paths = [], []
for i in range(len(water_image_names)):
vv_image_path = os.path.join(
train_dir, region_name_dates0[i], "tiles", "vv", water_image_names[i]
)
vv_image_paths.append(vv_image_path)
water_image_paths.append(vv_image_path)
# get vh image path
vh_image_name = water_image_names[i].replace("vv", "vh")
vh_image_path = os.path.join(
train_dir, region_name_dates0[i], "tiles", "vh", vh_image_name
)
vh_image_paths.append(vh_image_path)
# get flood mask path
flood_image_name = water_image_names[i].replace("_vv", "")
flood_label_path = os.path.join(
train_dir, region_name_dates0[i], "tiles", "flood_label", flood_image_name
)
flood_label_paths.append(flood_label_path)
# get water body mask path
water_body_label_name = water_image_names[i].replace("_vv", "")
water_body_label_path = os.path.join(
train_dir,
region_name_dates0[i],
"tiles",
"water_body_label",
water_body_label_name,
)
water_body_label_paths.append(water_body_label_path)
for i in range(len(background_image_names)):
vv_image_path = os.path.join(
train_dir, region_name_dates1[i], "tiles", "vv", background_image_names[i]
)
vv_image_paths.append(vv_image_path)
background_image_paths.append(vv_image_path)
# get vh image path
vh_image_name = background_image_names[i].replace("vv", "vh")
vh_image_path = os.path.join(
train_dir, region_name_dates1[i], "tiles", "vh", vh_image_name
)
vh_image_paths.append(vh_image_path)
# get flood mask path
flood_image_name = background_image_names[i].replace("_vv", "")
flood_label_path = os.path.join(
train_dir, region_name_dates1[i], "tiles", "flood_label", flood_image_name
)
flood_label_paths.append(flood_label_path)
# get water body mask path
water_body_label_name = background_image_names[i].replace("_vv", "")
water_body_label_path = os.path.join(
train_dir,
region_name_dates1[i],
"tiles",
"water_body_label",
water_body_label_name,
)
water_body_label_paths.append(water_body_label_path)
water_image_names[0]
# Shuffle the index and then split in train and validation
n = len(vv_image_paths) # number of images in the dataset
arr = np.arange(n) # array 0...n-1
np.random.shuffle(arr) # shuffle it
train_idx = arr[0 : int(np.round(0.80 * n))] # 80% train
valid_idx = arr[int(np.round(0.80 * n)) :] # 20% validation
print("Number of tiles in training set:", train_idx.size)
print("Number of tiles in validation set:", valid_idx.size)
print(
"Number of tiles in the training and validation set:",
train_idx.size + valid_idx.size,
)
vv_image_paths_train = list(np.array(vv_image_paths)[train_idx])
vh_image_paths_train = list(np.array(vh_image_paths)[train_idx])
flood_label_paths_train = list(np.array(flood_label_paths)[train_idx])
water_body_label_paths_train = list(np.array(water_body_label_paths)[train_idx])
train_paths = {
"vv_image_path": vv_image_paths_train,
"vh_image_path": vh_image_paths_train,
"flood_label_path": flood_label_paths_train,
"water_body_label_path": water_body_label_paths_train,
}
train_df = pd.DataFrame(train_paths)
print(train_df.shape)
train_df.head()
vv_image_paths_valid = list(np.array(vv_image_paths)[valid_idx])
vh_image_paths_valid = list(np.array(vh_image_paths)[valid_idx])
flood_label_paths_valid = list(np.array(flood_label_paths)[valid_idx])
water_body_label_paths_valid = list(np.array(water_body_label_paths)[valid_idx])
valid_paths = {
"vv_image_path": vv_image_paths_valid,
"vh_image_path": vh_image_paths_valid,
"flood_label_path": flood_label_paths_valid,
"water_body_label_path": water_body_label_paths_valid,
}
valid_df = pd.DataFrame(valid_paths)
print(valid_df.shape)
valid_df.head()
# ## # Step 2b: Create training undersampled dataframes
background_image_paths_train = [
path for path in background_image_paths if path in vv_image_paths_train
]
background_num_train = len(background_image_paths_train)
print("Number of background tiles included in training:", background_num_train)
water_image_paths_train = [
path for path in water_image_paths if path in vv_image_paths_train
]
water_image_names_train = [get_filename(pth) for pth in water_image_paths_train]
region_name_dates2 = ["_".join(n.split("_")[:2]) for n in water_image_names_train]
water_num_train = len(water_image_paths_train)
print("Number of water tiles included in training:", water_num_train)
num_samples = int(water_num_train * 0.15) # include 15% of water tiles
arr = np.arange(int(water_num_train * 0.15)) # array 0...n-1
np.random.shuffle(arr) # shuffle it
background_image_paths_train_undersampled = list(
np.array(background_image_paths_train)[arr[0:num_samples]]
)
background_image_names_train_undersampled = [
get_filename(pth) for pth in background_image_paths_train_undersampled
]
print(
"Number of background tiles included in training after undersampling:",
len(background_image_names_train_undersampled),
)
region_name_dates3 = [
"_".join(n.split("_")[:2]) for n in background_image_names_train_undersampled
]
(
vh_image_paths_train_undersampled,
flood_label_paths_train_undersampled,
water_body_label_paths_train_undersampled,
) = ([], [], [])
for i in range(len(water_image_names_train)):
# get vh image path
vh_image_name = water_image_names_train[i].replace("vv", "vh")
vh_image_path = os.path.join(
train_dir, region_name_dates2[i], "tiles", "vh", vh_image_name
)
vh_image_paths_train_undersampled.append(vh_image_path)
# get flood mask path
flood_image_name = water_image_names_train[i].replace("_vv", "")
flood_label_path = os.path.join(
train_dir, region_name_dates2[i], "tiles", "flood_label", flood_image_name
)
flood_label_paths_train_undersampled.append(flood_label_path)
# get water body mask path
water_body_label_name = water_image_names_train[i].replace("_vv", "")
water_body_label_path = os.path.join(
train_dir,
region_name_dates2[i],
"tiles",
"water_body_label",
water_body_label_name,
)
water_body_label_paths_train_undersampled.append(water_body_label_path)
vv_image_paths_train_undersampled = water_image_paths_train
print(
"Number of water body label included in training after undersampling:",
len(water_body_label_paths_train_undersampled),
)
for i in range(len(background_image_names_train_undersampled)):
vv_image_paths_train_undersampled.append(
background_image_paths_train_undersampled[i]
)
# get vh image path
vh_image_name = background_image_names_train_undersampled[i].replace("vv", "vh")
vh_image_path = os.path.join(
train_dir, region_name_dates3[i], "tiles", "vh", vh_image_name
)
vh_image_paths_train_undersampled.append(vh_image_path)
# get flood mask path
flood_image_name = background_image_names_train_undersampled[i].replace("_vv", "")
flood_label_path = os.path.join(
train_dir, region_name_dates3[i], "tiles", "flood_label", flood_image_name
)
flood_label_paths_train_undersampled.append(flood_label_path)
# get water body mask path
water_body_label_name = background_image_names_train_undersampled[i].replace(
"_vv", ""
)
water_body_label_path = os.path.join(
train_dir,
region_name_dates3[i],
"tiles",
"water_body_label",
water_body_label_name,
)
water_body_label_paths_train_undersampled.append(water_body_label_path)
assert (
len(vv_image_paths_train_undersampled)
== len(vh_image_paths_train_undersampled)
== len(flood_label_paths_train_undersampled)
== len(water_body_label_paths_train_undersampled)
)
print(
"Number of overall images included in training after undersampling:",
len(water_body_label_paths_train_undersampled),
)
# **1) Based on the consideration done in Lab1 and the above calculation, explain the original dataset is in term of class imbalance and how this changed in the undersample dataset.**
# **ANSWER HERE:**
# In Lab1, we had obtained an imbalanced class distribution. At the pixel level, 98% corresponded to background pixels. At the tile level, even after excluding blank images, the background tiles were still nearly 62% of the total.
# Here, in the undersample dataset, this class imbalance problem is addressed by including only a subset of the available background tiles. Particularly, this snipped of code: `num_samples = int(water_num_train * 0.15)`, computes the number of background tiles to include in the undersampled training set, setting it to 15% of the total number of water tiles in the original training set. That number is then selected from a shuffled array. On the other hand, the number of water tiles/water body labels stays the same after undersampling (8323). The code below provides a visualization similar to the ones used in Lab1, showing the new class distribution.
# Define the class labels and counts
classes = ["Water Tiles", "Background Tiles"]
counts = [
np.size(water_body_label_paths_train_undersampled),
np.size(background_image_names_train_undersampled),
]
# Create a bar chart
plt.bar(classes, counts)
total = sum(counts)
percentages = [(count / total) * 100 for count in counts]
# Add a title and axis labels
plt.title("Class Distribution at the tile level after undersampling")
plt.xlabel("Class")
plt.ylabel("Count")
for i, count in enumerate(counts):
percentage = (count / total) * 100
plt.text(i, count + 10, f"{percentage:.1f}%", ha="center")
# Show the plot
plt.show()
train_paths_undersample = {
"vv_image_path": vv_image_paths_train_undersampled,
"vh_image_path": vh_image_paths_train_undersampled,
"flood_label_path": flood_label_paths_train_undersampled,
"water_body_label_path": water_body_label_paths_train_undersampled,
}
train_df_undersample = pd.DataFrame(train_paths_undersample)
print(train_df_undersample.shape)
train_df_undersample.head()
# # Step 3: Visualize images
# This section is meant to be used to experiment the data. Feel free to change things and to explore the data.
train_df
cv2.imread(train_df_undersample.iloc[1200]["vv_image_path"], 0)
train_df_undersample.iloc[3600]["vv_image_path"]
visualize(train_df_undersample.iloc[3600])
visualize(train_df.iloc[3677])
# # Step 4: Setup the dataset for machine learning
# Since the Phase 1 (Development phase) of the ETCI 2021 Competition on Flood Detection provided with training data (which includes reference data) and a validation data (without reference data until phase 1 concludes),we will split our training dataset (that contains flood masks) into a smaller training and development set.
# ### Create a PyTorch dataset
# We will be using the PyTorch deep learning library to format this dataset and create our machine learning model. Therefore we will need to create a custom Dataset class and pass it into a DataLoader object (see the [PyTorch Dataset Tutorial](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html) for more detail on the topic). To compute image transformations we will use the [Albumentations](https://github.com/albumentations-team/albumentations) package.
class ETCIDataset(Dataset):
def __init__(self, dataframe, split, transform=None):
self.split = split
self.dataset = dataframe
self.transform = transform
def __len__(self):
return self.dataset.shape[0]
def __getitem__(self, index):
example = {}
df_row = self.dataset.iloc[index]
# load vv and vh images
vv_image = cv2.imread(df_row["vv_image_path"], 0) / 255.0
vh_image = cv2.imread(df_row["vh_image_path"], 0) / 255.0
# convert vv and vh images to rgb
rgb_image = s1_to_rgb(vv_image, vh_image)
if self.split == "test":
# no flood mask should be available
example["image"] = rgb_image.transpose((2, 0, 1)).astype(
"float32"
) # HWC->CHW
else:
# load ground truth flood mask
flood_mask = cv2.imread(df_row["flood_label_path"], 0) / 255.0
# compute transformations
if self.transform:
augmented = self.transform(image=rgb_image, mask=flood_mask)
rgb_image = augmented["image"]
flood_mask = augmented["mask"]
example["image"] = rgb_image.transpose((2, 0, 1)).astype(
"float32"
) # HWC->CHW
example["mask"] = flood_mask.astype("int64")
return example
# **2) Check the [Albumentations](https://github.com/albumentations-team/albumentations) package and implement both Vertical and Horizontal flip with probability 0.5 and RandomResizedCrop of 256 on both dimentions.**
import albumentations as A
### BEGINNING OF THE CODE
transform = A.Compose(
[
A.RandomResizedCrop(height=256, width=256),
A.OneOf([A.HorizontalFlip(p=1), A.VerticalFlip(p=1)], p=0.5),
]
)
# Apply the transforms to an image
image = cv2.imread(train_df_undersample.iloc[3600]["vv_image_path"])
transformed = transform(image=image)
transformed_image = transformed["image"]
# Display original and transformed images side by side
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
ax[0].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
ax[0].set_title("Original")
ax[1].imshow(cv2.cvtColor(transformed_image, cv2.COLOR_BGR2RGB))
ax[1].set_title("Transformed")
plt.show()
####END OF THE CODE
train_dataset = ETCIDataset(train_df, split="train", transform=transform)
valid_dataset = ETCIDataset(valid_df, split="valid", transform=None)
print("Trainining set size:", len(train_dataset))
print("Validation set size:", len(valid_dataset))
batch_size = 64
train_loader = DataLoader(
train_dataset, batch_size=batch_size, shuffle=True, num_workers=2, pin_memory=True
)
valid_loader = DataLoader(
valid_dataset, batch_size=batch_size, shuffle=False, num_workers=2, pin_memory=True
)
train_undersampled_dataset = ETCIDataset(
train_df_undersample, split="train", transform=transform
)
train_undersampled_loader = DataLoader(
train_undersampled_dataset,
batch_size=batch_size,
shuffle=True,
num_workers=2,
pin_memory=True,
)
print("Undersampled Trainining set size:", len(train_undersampled_dataset))
# # Step 5: Deep learning model creation
# ### Select hardware to train model
device = "cuda"
# **3) We will grab a segmentation model from the [Segmentation Models](https://github.com/qubvel/segmentation_models.pytorch) package ([documentation here](https://smp.readthedocs.io/en/latest/)). Read carfully the documentation and implement a UNet with resnet34 as encoder, without any pre-trained weights, and the appropriate number of in_channel and classes based on the dataset.**
import segmentation_models_pytorch as smp
def create_model(in_channels=3, num_classes=2):
model = smp.Unet(
encoder_name="resnet34",
encoder_weights=None,
in_channels=in_channels, # Because we are working with RBG images, we set in_channels=3
classes=num_classes,
)
return model
model_1 = create_model()
model_1.to(device) # load model into GPU memory
# ### Metric tracker
from sklearn.metrics import confusion_matrix
class EvalTracker:
def __init__(self, n_classes=2, smooth=0.0001):
self.n_classes = n_classes
self.reset()
self.smooth = smooth
def reset(self):
self.cm = np.zeros((self.n_classes, self.n_classes))
self.count = 0
def update(self, pred, target):
# pred: [B, 2, H, W]
# target: [B, H, W]
self.count += pred.shape[0]
# reshape inputs
pred = pred.argmax(dim=1).flatten() # [B*H*W]
target = target.flatten() # [B*H*W]
# put into cpu memory
pred = pred.detach().cpu().numpy()
target = target.detach().cpu().numpy()
# compute confusion matrix values
self.cm += confusion_matrix(target, pred)
def get_mean(self):
tn, fp, fn, tp = self.cm.ravel()
# compute IoU
iou = tp / (tp + fp + fn + self.smooth)
prec = tp / (tp + fp + self.smooth)
rec = tp / (tp + fn + self.smooth)
f1 = 2.0 * prec * rec / (prec + rec)
return iou, prec, rec, f1
# # Step 6: Train model on the full dataset
# ### Model config, optimizer and loss function
# set the number of times you want the model to see all of the training data
epochs = 10
learning_rate = 1e-4
optimizer = torch.optim.Adam(model_1.parameters(), lr=learning_rate)
criteria_no_weights = nn.CrossEntropyLoss(weight=None)
# ### Training loop
train_loss_dict = {}
val_loss_dict = {}
for epoch in range(epochs):
print("Epoch: [{}/{}]".format(epoch + 1, epochs))
# train set
pbar = tqdm(train_loader)
train_loss = 0.0
model_1.train()
eval_logger = EvalTracker()
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_1(image)
# get loss
loss = criteria_no_weights(pred, mask)
# update the model
optimizer.zero_grad()
loss.backward()
optimizer.step()
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
train_loss += loss.item() * image.size(0)
# calculate the average loss for both training and validation
train_loss /= len(train_loader.dataset)
train_loss_dict[epoch] = train_loss
# valid set
pbar = tqdm(valid_loader)
model_1.eval()
eval_logger = EvalTracker()
val_loss = 0.0
with torch.no_grad():
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_1(image)
# get loss
loss = criteria_no_weights(pred, mask)
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
val_loss += loss.item() * image.size(0)
val_loss /= len(valid_loader.dataset)
val_loss_dict[epoch] = val_loss
# Save the training loss values
with open("./train_loss_1_1_BCE.pkl", "wb") as file:
pickle.dump(train_loss_dict, file)
# Save the validation loss values
with open("./val_loss_1_1_BCE.pkl", "wb") as file:
pickle.dump(val_loss_dict, file)
# save model
torch.save(model_1.state_dict(), "model_1_BCE.pt")
# ### Plot Losses
# Load the training and validation loss dictionaries
train_loss = load(open("/kaggle/working/train_loss_1_1_BCE.pkl", "rb"))
val_loss = load(open("/kaggle/working/val_loss_1_1_BCE.pkl", "rb"))
# Retrieve each dictionary's values
train_values = train_loss.values()
val_values = val_loss.values()
# Generate a sequence of integers to represent the epoch numbers
epochs_range = range(1, epochs + 1)
# Plot and label the training and validation loss values
plt.plot(epochs_range, train_values, label="Training Loss")
plt.plot(epochs_range, val_values, label="Validation Loss")
# Add in a title and axes labels
plt.title("Training and Validation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
# Set the tick locations
plt.xticks(np.arange(0, epochs + 1, 2))
# Display the plot
plt.legend(loc="best")
plt.show()
# # Step 7: Train model on the undersampled dataset
# ### Model config, optimizer and loss function
batch_size = 64
epochs = 10
learning_rate = 1e-4
model_2 = create_model()
model_2.to(device)
optimizer = torch.optim.Adam(model_2.parameters(), lr=learning_rate)
criteria_no_weights = nn.CrossEntropyLoss(weight=None)
# **4) Implement a training loop similar to the one above but for the undersampled dataset. Use model_2 to avoid any overwriting of the previous model. Save the model as 'model_2d_BCE.pt'***
# ### Training loop
### CODE HERE###
train_loss_dict_2 = {}
val_loss_dict_2 = {}
for epoch in range(epochs):
print("Epoch: [{}/{}]".format(epoch + 1, epochs))
# train set
pbar = tqdm(train_undersampled_loader)
train_loss = 0.0
model_2.train()
eval_logger = EvalTracker()
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_2(image)
# get loss
loss = criteria_no_weights(pred, mask)
# update the model
optimizer.zero_grad()
loss.backward()
optimizer.step()
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
train_loss += loss.item() * image.size(0)
# calculate the average loss for both training and validation
train_loss /= len(train_undersampled_loader.dataset)
train_loss_dict_2[epoch] = train_loss
# valid set
pbar = tqdm(valid_loader)
model_2.eval()
eval_logger = EvalTracker()
val_loss = 0.0
with torch.no_grad():
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_2(image)
# get loss
loss = criteria_no_weights(pred, mask)
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
val_loss += loss.item() * image.size(0)
val_loss /= len(valid_loader.dataset)
val_loss_dict_2[epoch] = val_loss
# Save the training loss values
with open("./train_loss_2_BCE.pkl", "wb") as file:
pickle.dump(train_loss_dict_2, file)
# Save the validation loss values
with open("./val_loss_2_BCE.pkl", "wb") as file:
pickle.dump(val_loss_dict_2, file)
# save model
torch.save(model_2.state_dict(), "model_2_BCE.pt")
train_path = r"train_df.csv"
valid_path = r"valid_df.csv"
train_under_path = r"train_df_undersample.csv"
train_df.to_csv(train_path)
valid_df.to_csv(valid_path)
train_df_undersample.to_csv(train_under_path)
# ### Plot Losses
# Load the training and validation loss dictionaries
train_loss = load(open("/kaggle/working/train_loss_2_BCE.pkl", "rb"))
val_loss = load(open("/kaggle/working/val_loss_2_BCE.pkl", "rb"))
# Retrieve each dictionary's values
train_values = train_loss.values()
val_values = val_loss.values()
# Generate a sequence of integers to represent the epoch numbers
epochs_range = range(1, epochs + 1)
# Plot and label the training and validation loss values
plt.plot(epochs_range, train_values, label="Training Loss")
plt.plot(epochs_range, val_values, label="Validation Loss")
# Add in a title and axes labels
plt.title("Training and Validation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
# Set the tick locations
plt.xticks(np.arange(0, epochs + 1, 2))
# Display the plot
plt.legend(loc="best")
plt.show()
# # Step 8: Train model on the undersampled dataset with a weighted loss function
# ### Defining the split for the weighted Cross Entropy Loss
# It take quite a long time to calcualte, the ratio is around 5% flooded pixels, 95% background
n_size = len(flood_label_paths_train_undersampled)
n_flooded = np.ndarray(
(n_size,),
)
for i in tqdm(range(n_size)):
flood_label = cv2.imread(flood_label_paths_train_undersampled[i], 0)
n_flooded[i] = np.sum(flood_label) / 255
n_flooded_ratio = np.divide(n_flooded, 256 * 256)
flooded_pixels = np.sum(n_flooded)
background_pixels = 256 * 256 * n_size - np.sum(n_flooded)
print("Flooded Pixels:", flooded_pixels)
print("Background Pixels:", background_pixels)
print("Ratio:", np.mean(n_flooded_ratio))
# ### Model config, optimizer and loss function
# **5) Define the "Model config, optimizer and loss function" section as previously done but for "model_3" which will be trained with a [Weighted Cross Entropy Loss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html). Remember to store the weights as a torch tensor, to load it in the GPU, and be careful on the order of your weights.**
###CODE HERE
# Define the model configuration
model_3 = create_model()
model_3.to(device)
learning_rate = 1e-4
optimizer = torch.optim.Adam(model_3.parameters(), lr=learning_rate)
# define loss function with weights
weights = torch.tensor([0.0462, (1 - 0.0462)])
weights = weights.to(device)
criteria_weights = nn.CrossEntropyLoss(weight=weights)
# **6) Why did you choose the weights you used for the CrossEntropyLoss?**
# **ANSWER HERE:**
# The choice of class weights for the weighted cross-entropy loss is in this case a strategy to tackle imbalance issues in the dataset. I set the weight for the background to 0.05 and for the flooded pixels to 0.95, thus reversing the original relationship of the distribution. This means that we are giving more weight and importance to the class we are interested in, as it is is more important in the segmentation task than the first class.
# ### Training Loop
train_loss_dict = {}
val_loss_dict = {}
for epoch in range(epochs):
print("Epoch: [{}/{}]".format(epoch + 1, epochs))
# train set
pbar = tqdm(train_undersampled_loader)
train_loss = 0.0
model_3.train()
eval_logger = EvalTracker()
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_3(image)
# get loss
loss = criteria_weights(pred, mask)
# update the model
optimizer.zero_grad()
loss.backward()
optimizer.step()
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
train_loss += loss.item() * image.size(0)
# calculate the average loss for both training and validation
train_loss /= len(train_undersampled_loader.dataset)
train_loss_dict[epoch] = train_loss
# valid set
pbar = tqdm(valid_loader)
val_loss = 0.0
model_3.eval()
eval_logger = EvalTracker()
with torch.no_grad():
for batch in pbar:
# load image and mask into device memory
image = batch["image"].to(device)
mask = batch["mask"].to(device)
# pass images into model
pred = model_3(image)
# get loss
loss = criteria_weights(pred, mask)
# compute and display progress
eval_logger.update(pred, mask)
mIoU, Prec, Rec, f1 = eval_logger.get_mean()
pbar.set_description(
"Loss: {0:1.4f} | mIoU {1:1.4f} | Prec {2:1.4f} | Rec {3:1.4f} | F1 score {4:1.4f}".format(
loss.item(), mIoU, Prec, Rec, f1
)
)
val_loss += loss.item() * image.size(0)
val_loss /= len(valid_loader.dataset)
val_loss_dict[epoch] = val_loss
# Save the training loss values
with open("./train_loss_2d_WBCE.pkl", "wb") as file:
pickle.dump(train_loss_dict, file)
# Save the validation loss values
with open("./val_loss_2d_WBCE.pkl", "wb") as file:
pickle.dump(val_loss_dict, file)
# save model
torch.save(model_3.state_dict(), "model_2d_WBCE.pt")
# ### Plot Losses
from numpy import *
from pickle import load
# Load the training and validation loss dictionaries
train_loss = load(open("/kaggle/working/train_loss_2d_WBCE.pkl", "rb"))
val_loss = load(open("/kaggle/working/val_loss_2d_WBCE.pkl", "rb"))
# Retrieve each dictionary's values
train_values = train_loss.values()
val_values = val_loss.values()
# Generate a sequence of integers to represent the epoch numbers
epochs_range = range(1, epochs + 1)
# Plot and label the training and validation loss values
plt.plot(epochs_range, train_values, label="Training Loss")
plt.plot(epochs_range, val_values, label="Validation Loss")
# Add in a title and axes labels
plt.title("Training and Validation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
# Set the tick locations
plt.xticks(np.arange(0, epochs + 1, 2))
# Display the plot
plt.legend(loc="best")
plt.show()
# **7) How are the three models (model_1_BCE.pt, model_2d_BCE.pt, and model_2d_WBCE.pt) performning? Comment the performances of the models.**
# **ANSWER HERE:**
# We can analyse and compare the performance of the models based on the performance metrics of the last Epoch. The table below summarizes the results.
# | Model | Loss | MIoU | Precision | Recall | F1 score |
# |---------------------------------------------------- |-------- |-------- |----------- |-------- |---------- |
# | Full sample | 0.0054 | 0.5933 | 0.8283 | 0.6766 | 0.7448 |
# | Undersampled dataset | 0.0182 | 0.5958 | 0.8036 | 0.6973 | 0.7467 |
# | Undersampled dataset with a weighted loss function | 0.4686 | 0.0838 | 0.0842 | 0.9497 | 0.1547 |
# Based on the given metrics, it appears that the first two models (Full sample and Undersampled dataset) perform similarly and better than the third model (Undersampled dataset with a weighted loss function). The first two models have similar values for MIoU, Precision, Recall, and F1 score, with the Undersampled dataset model having slightly higher MIoU and Recall but slightly lower Precision and F1 score. The third model has much lower values for Loss, MIoU, Precision, and F1 score but a much higher value for Recall.
# In general, which model is better depends on the specific problem. For example, if high precision is important, then the Full sample model may be preferred. If high recall is important, then the Undersampled dataset model may be preferred. In our exercise, there is a clear tradeoff. Overlooking false positives (that is, considering an area is flooded when in fact it is not) might incur in inflating the costs to be faced by policymakers. On the other hand, overlooking false negatives (that is, ignoring some areas that are flooded and treat them as if they are not) might have negative effects from a humans right approach, as the cost to get help to flooded areas could be underestimated, and the strategy to cover those areas, get adequate aid, etc., might be suboptimal. In this context, we might want to give more importance to the **recall** metric: $\frac{TP}{(TP+FN)}$.
# Even though the weighted loss function model has the highest recall, the rest of the metrics perform very poorly, which makes it a problematic choice. In that sense, the model with the undersampled dataset without weights might be a better choice. I will use this model in Step 10.
# # Step 10: Test models
# ### Create a test dataset
# Let's create Dataset and DataLoader objects for the validation set. This time we will not have labels.
vv_image_paths = sorted(glob(test_dir + "/**/vv/*.png", recursive=True))
vv_image_names = [get_filename(pth) for pth in vv_image_paths]
region_name_dates = ["_".join(n.split("_")[:2]) for n in vv_image_names]
vh_image_paths, flood_label_paths, water_body_label_paths, region_names = [], [], [], []
for i in range(len(vv_image_paths)):
# get vh image path
vh_image_name = vv_image_names[i].replace("vv", "vh")
vh_image_path = os.path.join(
test_dir, region_name_dates[i], "tiles", "vh", vh_image_name
)
vh_image_paths.append(vh_image_path)
# get flood mask path ()
flood_label_paths.append(np.NaN)
# get water body mask path
water_body_label_name = vv_image_names[i].replace("_vv", "")
water_body_label_path = os.path.join(
test_dir,
region_name_dates[i],
"tiles",
"water_body_label",
water_body_label_name,
)
water_body_label_paths.append(water_body_label_path)
# get region name
region_name = region_name_dates[i].split("_")[0]
region_names.append(region_name)
test_paths = {
"vv_image_path": vv_image_paths,
"vh_image_path": vh_image_paths,
"flood_label_path": flood_label_paths,
"water_body_label_path": water_body_label_paths,
"region": region_names,
}
test_df = pd.DataFrame(valid_paths)
print(test_df.shape)
test_df.head()
# ### Run inference
# **8) Choose your best performing model from Steps 6-9 and run inference below:**
# load model
model_test = create_model()
model_test.load_state_dict(
torch.load("/kaggle/working/model_2_BCE.pt")
) # CHANGE THE MODEL HERE. Default set as model_1_BCE.pt
model_test.to(device)
test_dataset = ETCIDataset(test_df, split="test", transform=None)
test_loader = DataLoader(
test_dataset, batch_size=batch_size, shuffle=False, num_workers=2, pin_memory=True
) # make sure shuffle is False
final_predictions = []
model_test.eval()
with torch.no_grad():
for batch in tqdm(test_loader):
# load image and mask into device memory
image = batch["image"].to(device)
# pass images into model
pred = model_test(image)
# compute class predictions, i.e. flood or no-flood
class_pred = pred.argmax(dim=1)
# convert class prediction to numpy
class_pred = class_pred.detach().cpu().numpy()
# add to final predictions
final_predictions.append(class_pred.astype("uint8"))
final_predictions = np.concatenate(final_predictions, axis=0)
# check final prediction shape
print(final_predictions.shape)
# ### Visualize some results
index = 252
visualize_result(valid_df.iloc[index], final_predictions[index], figsize=(17, 10))
index = -100
visualize_result(valid_df.iloc[index], final_predictions[index], figsize=(17, 10))
index = 1910
visualize_result(valid_df.iloc[index], final_predictions[index], figsize=(17, 10))
| false | 0 | 12,845 | 0 | 12,871 | 12,845 |
<jupyter_start><jupyter_text>Consumer Reviews of Amazon Products
# About This Data
This is a list of over 34,000 consumer reviews for Amazon products like the Kindle, Fire TV Stick, and more provided by [Datafiniti's Product Database][1]. The dataset includes basic product information, rating, review text, and more for each product.
*Note that this is a sample of a large dataset. The full dataset is available through Datafiniti.*
# What You Can Do With This Data
You can use this data to [analyze Amazon’s most successful consumer electronics product launches][2], discover insights into consumer reviews, and assist with machine learning models. For example (a short pandas sketch for the first question follows this list):
* What are the most reviewed Amazon products?
* What are the initial and current number of customer reviews for each product?
* How do the reviews in the first 90 days after a product launch compare to the price of the product?
* How do the reviews in the first 90 days after a product launch compare to the days available for sale?
* Map the keywords in the review text against the review ratings to help train sentiment models.
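As an illustration, the first question can be answered with a few lines of pandas. This is a minimal sketch, assuming one of the CSV files from this dataset (e.g. `1429_1.csv`) with its `name` and `reviews.rating` columns:

```python
import pandas as pd

df = pd.read_csv("1429_1.csv")  # one of the CSVs in this dataset; adjust the path as needed

# Most reviewed products: number of review rows per product name
most_reviewed = df["name"].value_counts().head(10)
print(most_reviewed)

# Average rating for those same products
print(df.groupby("name")["reviews.rating"].mean().loc[most_reviewed.index].round(2))
```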
# Data Schema
A full schema for the data is available in our [support documentation][3].
# About Datafiniti
Datafiniti provides instant access to web data. We compile data from thousands of websites to create standardized databases of business, product, and property information. [Learn more][4].
# Interested in the Full Dataset?
You can access the full dataset by running the following query with [Datafiniti’s Product API][5].
`{
"query": "dateUpdated:[2017-09-01 TO *] AND brand:Amazon* AND categories:* AND primaryCategories:* AND name:* AND reviews:*", "format": "csv", "download": true
}`
**The total number of results may vary.*
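For orientation, here is a hedged sketch of how such a query is typically submitted with Python's `requests` library; the endpoint URL and authentication header are assumptions, so check the linked Datafiniti API documentation for the exact values:

```python
import requests

API_TOKEN = "YOUR_DATAFINITI_API_TOKEN"  # placeholder; issued with a Datafiniti account
URL = "https://api.datafiniti.co/v4/products/search"  # assumed endpoint; verify in the API docs

query = {
    "query": (
        "dateUpdated:[2017-09-01 TO *] AND brand:Amazon* AND categories:* "
        "AND primaryCategories:* AND name:* AND reviews:*"
    ),
    "format": "csv",
    "download": True,
}

response = requests.post(URL, json=query, headers={"Authorization": "Bearer " + API_TOKEN})
print(response.status_code)
```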
Get this data and more by [creating a free Datafiniti account][6] or [requesting a demo][7].
[1]: https://datafiniti.co/products/product-data/
[2]: https://datafiniti.co/amazon-fire-stick-juggernaut/
[3]: https://datafiniti-api.readme.io/docs/product-data-schema
[4]: https://datafiniti.co
[5]: https://developer.datafiniti.co/docs/getting-started-with-product-data
[6]: https://datafiniti.co/pricing/product-data-pricing/
[7]: https://datafiniti.co/request-a-demo/
Kaggle dataset identifier: consumer-reviews-of-amazon-products
<jupyter_script>#
# # Import Required Packages
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import math
import warnings
warnings.filterwarnings("ignore") # Hides warning
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore", category=UserWarning)
sns.set_style("whitegrid") # Plotting style
np.random.seed(7)  # seeding the random number generator
csv = "/kaggle/input/consumer-reviews-of-amazon-products/1429_1.csv"
df = pd.read_csv(csv)
df.head(2)
data = df.copy()
data.describe()
data.info()
data["asins"].unique()
asins_unique = len(data["asins"].unique())
print("Number of Unique ASINs: " + str(asins_unique))
# Build histograms of the numeric columns; set the number of bins and figure size (width, height)
data.hist(bins=50, figsize=(20, 15))
plt.show()
# # Distribution of rating
#
# distribution of rating
import matplotlib.pyplot as plt
import seaborn as sns
sns.countplot(x=data["reviews.rating"])  # pass as x= so it works across seaborn versions
plt.xlabel("Rating")
plt.ylabel("Count")
# # Distribution of sentiment
#
# map ratings 1, 2, 3 to 0 (NEGATIVE) and 4, 5 to 1 (POSITIVE)
sentiment_score = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1}
sentiment = {0: "NEGATIVE", 1: "POSITIVE"}
# mapping
data["sentiment_score"] = data["reviews.rating"].map(sentiment_score)
data["sentiment"] = data["sentiment_score"].map(sentiment)
data.head()
# distribution of sentiment
plt.figure(figsize=(8, 8))
labels = ["POSITIVE", "NEGATIVE"]
colors = ["#189AB4", "#D4F1F4"]
plt.pie(data["sentiment"].value_counts(), autopct="%0.2f%%", colors=colors)
plt.title("Distribution of sentiment", size=14, y=-0.01)
plt.legend(labels, ncol=2, loc=9)
plt.show()
# # All Reviews Word Cloud
# get all words used in the review text (drop missing reviews first);
# note: str(data["reviews.text"]) would only stringify the truncated Series repr,
# not the actual review texts
all_words = pd.Series(" ".join(data["reviews.text"].dropna().astype(str)).split())
# plot word cloud
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
wordcloud = WordCloud(width=1000, height=500).generate(" ".join(all_words))
plt.figure(figsize=(15, 8))
plt.imshow(wordcloud)
plt.title("Most used words in all reviws", size=16)
plt.axis("off")
plt.show()
from sklearn.model_selection import StratifiedShuffleSplit
print("Before {}".format(len(data)))
dataAfter = data.dropna(subset=["reviews.rating"])
# Removes all NAN in reviews.rating
print("After {}".format(len(dataAfter)))
dataAfter["reviews.rating"] = dataAfter["reviews.rating"].astype(int)
split = StratifiedShuffleSplit(n_splits=5, test_size=0.2)
for train_index, test_index in split.split(dataAfter, dataAfter["reviews.rating"]):
    # split.split() yields positional indices, so use iloc; reindex() does label lookups and,
    # since dropna() left gaps in the index, would fill the splits with NaN rows
    strat_train = dataAfter.iloc[train_index]
    strat_test = dataAfter.iloc[test_index]
len(strat_train), len(strat_test)
strat_train["reviews.rating"].value_counts() / len(strat_train)
strat_test["reviews.rating"].value_counts() / len(strat_test)
# # Data Exploration
reviews = strat_train.copy()
reviews.head(2)
len(reviews["name"].unique()), len(reviews["asins"].unique())
reviews.info()
reviews.groupby("asins")["name"].unique()
# Let's see all the different names for this product listing, which has 2 ASINs
different_names = reviews[reviews["asins"] == "B00L9EPT8O,B01E6AO69U"]["name"].unique()
for name in different_names:
print(name)
fig = plt.figure(figsize=(16, 10))
ax1 = plt.subplot(211)
ax2 = plt.subplot(212, sharex=ax1)
reviews["asins"].value_counts().plot(kind="bar", ax=ax1, title="ASIN Frequency")
np.log10(reviews["asins"].value_counts()).plot(
kind="bar", ax=ax2, title="ASIN Frequency (Log10 Adjusted)"
)
plt.show()
# Entire training dataset average rating
reviews["reviews.rating"].mean()
# # Reviews.rating / ASINs
asins_count_ix = reviews["asins"].value_counts().index
plt.subplots(2, 1, figsize=(16, 12))
plt.subplot(2, 1, 1)
reviews["asins"].value_counts().plot(kind="bar", title="ASIN Frequency")
plt.subplot(2, 1, 2)
sns.pointplot(x="asins", y="reviews.rating", order=asins_count_ix, data=reviews)
plt.xticks(rotation=90)
plt.show()
# # Reviews.doRecommend / ASINs
plt.subplots(2, 1, figsize=(16, 12))
plt.subplot(2, 1, 1)
reviews["asins"].value_counts().plot(kind="bar", title="ASIN Frequency")
plt.subplot(2, 1, 2)
sns.pointplot(x="asins", y="reviews.doRecommend", order=asins_count_ix, data=reviews)
plt.xticks(rotation=90)
plt.show()
# # Correlations
corr_matrix = reviews.corr()
corr_matrix
# Here we can analyze the number of reviews per ASIN against the average rating per ASIN
counts = reviews["asins"].value_counts().to_frame(name="review_count")  # reviews per ASIN
counts.head()
avg_rating = reviews.groupby("asins")["reviews.rating"].mean().to_frame()
avg_rating.head()
table = counts.join(avg_rating)
table.head(30)
plt.scatter("asins", "reviews.rating", data=table)
table.corr()
# # Sentiment Analysis
def sentiments(rating):
if (rating == 5) or (rating == 4):
return "Positive"
elif rating == 3:
return "Neutral"
elif (rating == 2) or (rating == 1):
return "Negative"
# Add sentiments to the data
strat_train["Sentiment"] = strat_train["reviews.rating"].apply(sentiments)
strat_test["Sentiment"] = strat_test["reviews.rating"].apply(sentiments)
strat_train["Sentiment"][:20]
# # Prepare data
#
X_train = strat_train["reviews.text"]
X_train_targetSentiment = strat_train["Sentiment"]
X_test = strat_test["reviews.text"]
X_test_targetSentiment = strat_test["Sentiment"]
print(len(X_train), len(X_test))
# # Feature Extraction
# Replace "nan" with space
X_train = X_train.fillna(" ")
X_test = X_test.fillna(" ")
X_train_targetSentiment = X_train_targetSentiment.fillna(" ")
X_test_targetSentiment = X_test_targetSentiment.fillna(" ")
# Text preprocessing and occurrence counting
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape
from sklearn.feature_extraction.text import TfidfTransformer
# use_idf=False gives plain term-frequency normalization; the pipelines below use the default use_idf=True
tfidf_transformer = TfidfTransformer(use_idf=False)
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
# # Building a Pipeline from the Extracted Features
# # Multinomial Naive Bayes
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
clf_multiNB_pipe = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf_nominalNB", MultinomialNB()),
]
)
clf_multiNB_pipe.fit(X_train, X_train_targetSentiment)
import numpy as np
predictedMultiNB = clf_multiNB_pipe.predict(X_test)
mnb = np.mean(predictedMultiNB == X_test_targetSentiment)
mnb * 100
# # Logistic Regression Classifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
clf_logReg_pipe = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf_logReg", LogisticRegression()),
]
)
clf_logReg_pipe.fit(X_train, X_train_targetSentiment)
import numpy as np
predictedLogReg = clf_logReg_pipe.predict(X_test)
lrc = np.mean(predictedLogReg == X_test_targetSentiment)
lrc * 100
# # Support Vector Machine Classifier
from sklearn.svm import LinearSVC
clf_linearSVC_pipe = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf_linearSVC", LinearSVC()),
]
)
clf_linearSVC_pipe.fit(X_train, X_train_targetSentiment)
predictedLinearSVC = clf_linearSVC_pipe.predict(X_test)
svmc = np.mean(predictedLinearSVC == X_test_targetSentiment)
svmc * 100
# # Decision Tree Classifier
#
from sklearn.tree import DecisionTreeClassifier
clf_decisionTree_pipe = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf_decisionTree", DecisionTreeClassifier()),
]
)
clf_decisionTree_pipe.fit(X_train, X_train_targetSentiment)
predictedDecisionTree = clf_decisionTree_pipe.predict(X_test)
dtc = np.mean(predictedDecisionTree == X_test_targetSentiment)
dtc * 100
# # Random Forest Classifier
from sklearn.ensemble import RandomForestClassifier
clf_randomForest_pipe = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf_randomForest", RandomForestClassifier()),
]
)
clf_randomForest_pipe.fit(X_train, X_train_targetSentiment)
predictedRandomForest = clf_randomForest_pipe.predict(X_test)
rfc = np.mean(predictedRandomForest == X_test_targetSentiment)
rfc * 100
# # Performance Analysis of Support Vector Machine Classifier
from sklearn.model_selection import GridSearchCV
parameters = {
"vect__ngram_range": [(1, 1), (1, 2)],
"tfidf__use_idf": (True, False),
}
gs_clf_LinearSVC_pipe = GridSearchCV(clf_linearSVC_pipe, parameters, n_jobs=-1)
gs_clf_LinearSVC_pipe = gs_clf_LinearSVC_pipe.fit(X_train, X_train_targetSentiment)
# Example (commented out): predict the sentiment of new, unseen review texts.
# new_text = ["The tablet is good, really liked it.",              # expected Positive
#             "The tablet is ok, but it works fine.",              # expected Neutral
#             "The tablet is not good, does not work very well."]  # expected Negative
# gs_clf_LinearSVC_pipe.predict(new_text)  # returns the predicted sentiment labels directly
predictedGS_clf_LinearSVC_pipe = gs_clf_LinearSVC_pipe.predict(X_test)
GSLinearSVC = np.mean(predictedGS_clf_LinearSVC_pipe == X_test_targetSentiment)
GSLinearSVC * 100
for performance_analysis in (
gs_clf_LinearSVC_pipe.best_score_,
gs_clf_LinearSVC_pipe.best_estimator_,
gs_clf_LinearSVC_pipe.best_params_,
):
print(performance_analysis)
from sklearn import metrics
metrics.confusion_matrix(X_test_targetSentiment, predictedGS_clf_LinearSVC_pipe)
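# The matrix above prints without row/column labels. As a small optional sketch, it can be
# wrapped in a labelled DataFrame; passing labels= fixes the order to the three Sentiment
# values defined earlier.
cm_labels = ["Negative", "Neutral", "Positive"]
cm = metrics.confusion_matrix(
    X_test_targetSentiment, predictedGS_clf_LinearSVC_pipe, labels=cm_labels
)
cm_df = pd.DataFrame(
    cm,
    index=["true_" + c for c in cm_labels],
    columns=["pred_" + c for c in cm_labels],
)
print(cm_df)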
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
print(classification_report(X_test_targetSentiment, predictedGS_clf_LinearSVC_pipe))
print(
"Accuracy: {}".format(
accuracy_score(X_test_targetSentiment, predictedGS_clf_LinearSVC_pipe)
)
)
# # Summary
from prettytable import PrettyTable
Summary = PrettyTable(["Model Name", "Accuracy in %"])
Summary.add_row(["Multinominal Naive Bayes", "{:.2f}".format(mnb * 100)])
Summary.add_row(["Logistic Regression Classifier", "{:.2f}".format(lrc * 100)])
Summary.add_row(["Support Vector Machine Classifier", "{:.2f}".format(svmc * 100)])
Summary.add_row(["Decision Tree Classifier", "{:.2f}".format(dtc * 100)])
Summary.add_row(["Random Forest Classifier", "{:.2f}".format(rfc * 100)])
Summary.add_row(["GridSearchClf_LinearSVC", "{:.2f}".format(GSLinearSVC * 100)])
print(Summary)
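# Optional sanity check (not part of the original notebook): because the classes are
# imbalanced, the accuracies above are best read against a majority-class baseline.
majority_class = X_train_targetSentiment.value_counts().idxmax()
baseline_acc = np.mean(X_test_targetSentiment == majority_class)
print("Majority-class baseline accuracy: {:.2f}%".format(baseline_acc * 100))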
#
# # Import Required Packages
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import math
import warnings
warnings.filterwarnings("ignore") # Hides warning
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore", category=UserWarning)
sns.set_style("whitegrid") # Plotting style
np.random.seed(7)  # seeding the random number generator
csv = "/kaggle/input/consumer-reviews-of-amazon-products/1429_1.csv"
df = pd.read_csv(csv)
df.head(2)
data = df.copy()
data.describe()
data.info()
data["asins"].unique()
asins_unique = len(data["asins"].unique())
print("Number of Unique ASINs: " + str(asins_unique))
# Build a histogram; set the number of bins and the figure size (width, height)
data.hist(bins=50, figsize=(20, 15))
plt.show()
# # Distribution of rating
#
# distribution of rating
import matplotlib.pyplot as plt
import seaborn as sns
sns.countplot(x="reviews.rating", data=data)
plt.xlabel("Rating")
# # Distribution of sentiment
#
# map ratings 1, 2, 3 to 0 (NEGATIVE) and 4, 5 to 1 (POSITIVE)
sentiment_score = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1}
sentiment = {0: "NEGATIVE", 1: "POSITIVE"}
# mapping
data["sentiment_score"] = data["reviews.rating"].map(sentiment_score)
data["sentiment"] = data["sentiment_score"].map(sentiment)
data.head()
# distribution of sentiment
plt.figure(figsize=(8, 8))
labels = ["POSITIVE", "NEGATIVE"]
colors = ["#189AB4", "#D4F1F4"]
plt.pie(data["sentiment"].value_counts(), autopct="%0.2f%%", colors=colors)
plt.title("Distribution of sentiment", size=14, y=-0.01)
plt.legend(labels, ncol=2, loc=9)
plt.show()
# # All Reviews Wordcloud
# get all used words (drop missing reviews, join them into one string, then split into words)
all_words = pd.Series(" ".join(data["reviews.text"].dropna().astype(str)).split())
# plot word cloud
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
wordcloud = WordCloud(width=1000, height=500).generate(" ".join(all_words))
plt.figure(figsize=(15, 8))
plt.imshow(wordcloud)
plt.title("Most used words in all reviws", size=16)
plt.axis("off")
plt.show()
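# The STOPWORDS set imported above is not used anywhere; as an optional, hedged addition,
# the sketch below rebuilds the word cloud with common English stopwords filtered out so
# that content words stand out. It reuses the `all_words` series computed above.
wordcloud_stop = WordCloud(width=1000, height=500, stopwords=STOPWORDS).generate(
    " ".join(all_words)
)
plt.figure(figsize=(15, 8))
plt.imshow(wordcloud_stop)
plt.title("Most used words in all reviews (stopwords removed)", size=16)
plt.axis("off")
plt.show()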
from sklearn.model_selection import StratifiedShuffleSplit
print("Before {}".format(len(data)))
dataAfter = data.dropna(subset=["reviews.rating"])
# Removes all NAN in reviews.rating
print("After {}".format(len(dataAfter)))
dataAfter["reviews.rating"] = dataAfter["reviews.rating"].astype(int)
split = StratifiedShuffleSplit(n_splits=5, test_size=0.2)
for train_index, test_index in split.split(dataAfter, dataAfter["reviews.rating"]):
    strat_train = dataAfter.iloc[train_index]
    strat_test = dataAfter.iloc[test_index]
len(strat_train), len(strat_test)
strat_train["reviews.rating"].value_counts() / len(strat_train)
strat_test["reviews.rating"].value_counts() / len(strat_test)
# # Data Exploration
reviews = strat_train.copy()
reviews.head(2)
len(reviews["name"].unique()), len(reviews["asins"].unique())
reviews.info()
reviews.groupby("asins")["name"].unique()
# Let's see all the different names for this product that have 2 ASINs
different_names = reviews[reviews["asins"] == "B00L9EPT8O,B01E6AO69U"]["name"].unique()
for name in different_names:
print(name)
fig = plt.figure(figsize=(16, 10))
ax1 = plt.subplot(211)
ax2 = plt.subplot(212, sharex=ax1)
reviews["asins"].value_counts().plot(kind="bar", ax=ax1, title="ASIN Frequency")
np.log10(reviews["asins"].value_counts()).plot(
kind="bar", ax=ax2, title="ASIN Frequency (Log10 Adjusted)"
)
plt.show()
# Entire training dataset average rating
reviews["reviews.rating"].mean()
# # Reviews.rating / ASINs
asins_count_ix = reviews["asins"].value_counts().index
plt.subplots(2, 1, figsize=(16, 12))
plt.subplot(2, 1, 1)
reviews["asins"].value_counts().plot(kind="bar", title="ASIN Frequency")
plt.subplot(2, 1, 2)
sns.pointplot(x="asins", y="reviews.rating", order=asins_count_ix, data=reviews)
plt.xticks(rotation=90)
plt.show()
# # Reviews.doRecommend/ASINs
plt.subplots(2, 1, figsize=(16, 12))
plt.subplot(2, 1, 1)
reviews["asins"].value_counts().plot(kind="bar", title="ASIN Frequency")
plt.subplot(2, 1, 2)
sns.pointplot(x="asins", y="reviews.doRecommend", order=asins_count_ix, data=reviews)
plt.xticks(rotation=90)
plt.show()
# # Correlations
corr_matrix = reviews.corr()
corr_matrix
# Here we can analyze reviews.ratings with asins
counts = reviews["asins"].value_counts().to_frame()
counts.head()
avg_rating = reviews.groupby("asins")["reviews.rating"].mean().to_frame()
avg_rating.head()
table = counts.join(avg_rating)
table.head(30)
plt.scatter("asins", "reviews.rating", data=table)
table.corr()
# # Sentiment Analysis
def sentiments(rating):
if (rating == 5) or (rating == 4):
return "Positive"
elif rating == 3:
return "Neutral"
elif (rating == 2) or (rating == 1):
return "Negative"
# Add sentiments to the data
strat_train["Sentiment"] = strat_train["reviews.rating"].apply(sentiments)
strat_test["Sentiment"] = strat_test["reviews.rating"].apply(sentiments)
strat_train["Sentiment"][:20]
# # Prepare data
#
X_train = strat_train["reviews.text"]
X_train_targetSentiment = strat_train["Sentiment"]
X_test = strat_test["reviews.text"]
X_test_targetSentiment = strat_test["Sentiment"]
print(len(X_train), len(X_test))
# # Feature Extraction
# Replace "nan" with space
X_train = X_train.fillna(" ")
X_test = X_test.fillna(" ")
X_train_targetSentiment = X_train_targetSentiment.fillna(" ")
X_test_targetSentiment = X_test_targetSentiment.fillna(" ")
# Text preprocessing and occurrence counting
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer(use_idf=False)
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
# # Building a Pipeline from the Extracted Features
# # Multinomial Naive Bayes
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
clf_multiNB_pipe = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf_nominalNB", MultinomialNB()),
]
)
clf_multiNB_pipe.fit(X_train, X_train_targetSentiment)
import numpy as np
predictedMultiNB = clf_multiNB_pipe.predict(X_test)
mnb = np.mean(predictedMultiNB == X_test_targetSentiment)
mnb * 100
# # Logistic Regression Classifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
clf_logReg_pipe = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf_logReg", LogisticRegression()),
]
)
clf_logReg_pipe.fit(X_train, X_train_targetSentiment)
import numpy as np
predictedLogReg = clf_logReg_pipe.predict(X_test)
lrc = np.mean(predictedLogReg == X_test_targetSentiment)
lrc * 100
# # Support Vector Machine Classifier
from sklearn.svm import LinearSVC
clf_linearSVC_pipe = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf_linearSVC", LinearSVC()),
]
)
clf_linearSVC_pipe.fit(X_train, X_train_targetSentiment)
predictedLinearSVC = clf_linearSVC_pipe.predict(X_test)
svmc = np.mean(predictedLinearSVC == X_test_targetSentiment)
svmc * 100
# # Decision Tree Classifier
#
from sklearn.tree import DecisionTreeClassifier
clf_decisionTree_pipe = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf_decisionTree", DecisionTreeClassifier()),
]
)
clf_decisionTree_pipe.fit(X_train, X_train_targetSentiment)
predictedDecisionTree = clf_decisionTree_pipe.predict(X_test)
dtc = np.mean(predictedDecisionTree == X_test_targetSentiment)
dtc * 100
# # Random Forest Classifier
from sklearn.ensemble import RandomForestClassifier
clf_randomForest_pipe = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf_randomForest", RandomForestClassifier()),
]
)
clf_randomForest_pipe.fit(X_train, X_train_targetSentiment)
predictedRandomForest = clf_randomForest_pipe.predict(X_test)
rfc = np.mean(predictedRandomForest == X_test_targetSentiment)
rfc * 100
# # Performance Analysis of Support Vector Machine Classifier
from sklearn.model_selection import GridSearchCV
parameters = {
"vect__ngram_range": [(1, 1), (1, 2)],
"tfidf__use_idf": (True, False),
}
gs_clf_LinearSVC_pipe = GridSearchCV(clf_linearSVC_pipe, parameters, n_jobs=-1)
gs_clf_LinearSVC_pipe = gs_clf_LinearSVC_pipe.fit(X_train, X_train_targetSentiment)
# new_text = ["The tablet is good, really liked it.", # positive
# "The tablet is ok, but it works fine.", # neutral
# "The tablet is not good, does not work very well."] # negative
# X_train_targetSentiment[gs_clf_LinearSVC_pipe.predict(new_text)]
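# A minimal sketch (my addition) of what the commented-out idea above intends: the fitted
# grid-search pipeline can classify new, unseen review texts directly. The example
# sentences are made up for illustration only.
new_text = [
    "The tablet is good, really liked it.",  # expected Positive
    "The tablet is ok, but it works fine.",  # expected Neutral-ish
    "The tablet is not good, does not work very well.",  # expected Negative
]
print(gs_clf_LinearSVC_pipe.predict(new_text))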
predictedGS_clf_LinearSVC_pipe = gs_clf_LinearSVC_pipe.predict(X_test)
GSLinearSVC = np.mean(predictedGS_clf_LinearSVC_pipe == X_test_targetSentiment)
GSLinearSVC * 100
for performance_analysis in (
gs_clf_LinearSVC_pipe.best_score_,
gs_clf_LinearSVC_pipe.best_estimator_,
gs_clf_LinearSVC_pipe.best_params_,
):
print(performance_analysis)
from sklearn import metrics
metrics.confusion_matrix(X_test_targetSentiment, predictedGS_clf_LinearSVC_pipe)
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
print(classification_report(X_test_targetSentiment, predictedGS_clf_LinearSVC_pipe))
print(
"Accuracy: {}".format(
accuracy_score(X_test_targetSentiment, predictedGS_clf_LinearSVC_pipe)
)
)
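# A small optional addition: draw the same confusion matrix as a labelled heatmap so that
# the Negative/Neutral/Positive classes are easier to read. It only reuses objects defined above.
class_labels = sorted(X_test_targetSentiment.unique())
cm_gs = metrics.confusion_matrix(
    X_test_targetSentiment, predictedGS_clf_LinearSVC_pipe, labels=class_labels
)
plt.figure(figsize=(6, 5))
sns.heatmap(
    cm_gs, annot=True, fmt="d", xticklabels=class_labels, yticklabels=class_labels
)
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.show()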
# # Summary
from prettytable import PrettyTable
Summary = PrettyTable(["Model Name", "Accuracy in %"])
Summary.add_row(["Multinominal Naive Bayes", "{:.2f}".format(mnb * 100)])
Summary.add_row(["Logistic Regression Classifier", "{:.2f}".format(lrc * 100)])
Summary.add_row(["Support Vector Machine Classifier", "{:.2f}".format(svmc * 100)])
Summary.add_row(["Decision Tree Classifier", "{:.2f}".format(dtc * 100)])
Summary.add_row(["Random Forest Classifier", "{:.2f}".format(rfc * 100)])
Summary.add_row(["GridSearchClf_LinearSVC", "{:.2f}".format(GSLinearSVC * 100)])
print(Summary)
| false | 0 | 3,466 | 9 | 4,064 | 3,466 |
||
129073676
|
<jupyter_start><jupyter_text>Breast Cancer Wisconsin (Diagnostic) Data Set
Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image.
The actual linear program used to obtain the separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
Also can be found on UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
Attribute Information:
1) ID number
2) Diagnosis (M = malignant, B = benign)
3-32)
Ten real-valued features are computed for each cell nucleus:
a) radius (mean of distances from center to points on the perimeter)
b) texture (standard deviation of gray-scale values)
c) perimeter
d) area
e) smoothness (local variation in radius lengths)
f) compactness (perimeter^2 / area - 1.0)
g) concavity (severity of concave portions of the contour)
h) concave points (number of concave portions of the contour)
i) symmetry
j) fractal dimension ("coastline approximation" - 1)
The mean, standard error and "worst" or largest (mean of the three
largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 3 is Mean Radius, field
13 is Radius SE, field 23 is Worst Radius.
All feature values are recoded with four significant digits.
Missing attribute values: none
Class distribution: 357 benign, 212 malignant
Kaggle dataset identifier: breast-cancer-wisconsin-data
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns # data visualization library
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.io as pio
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import time
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# import warnings library
import warnings
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# ignore all warnings
warnings.filterwarnings("ignore")
# Any results you write to the current directory are saved as output.
# Data Content
# 1. ID number
# 2. Diagnosis (M = malignant, B = benign)
# 3. radius (mean of distances from center to points on the perimeter)
# 4. texture (standard deviation of gray-scale values)
# 5. perimeter
# 6. area
# 7. smoothness (local variation in radius lengths)
# 8. compactness (perimeter^2 / area - 1.0)
# 9. concavity (severity of concave portions of the contour)
# 10. concave points (number of concave portions of the contour)
# 11. symmetry
# 12. fractal dimension ("coastline approximation" - 1)
# 13. The mean, standard error and "worst" or largest (mean of the three largest values) of these features were computed for each image, resulting in 30 features. For instance, field 3 is Mean Radius, field 13 is Radius SE, field 23 is Worst Radius.
# 14. All feature values are recoded with four significant digits.
# 15. Missing attribute values: none
# 16. Class distribution: 357 benign, 212 malignant
data = pd.read_csv("/kaggle/input/breast-cancer-wisconsin-data/data.csv")
data.head()
# Four things catch my attention: 1) There is an id column that cannot be used for classification. 2) Diagnosis is our class label. 3) The Unnamed: 32 column contains only NaN values, so we do not need it. 4) I have no idea what the other feature names mean, and I do not really need to, because machine learning is great :)
# Therefore, drop these unnecessary columns. But note that this is not feature selection yet. It is like glancing around a pub; we are not choosing our drink yet!
# feature names as a list
col = data.columns # .columns gives columns names in data
print(col)
# y includes our labels and x includes our features
y = data.diagnosis # M or B
list = ["Unnamed: 32", "id", "diagnosis"]
x = data.drop(list, axis=1)
x.head()
fig = px.histogram(y, x="diagnosis", color="diagnosis", width=700, height=500)
fig.show()
# OK, now we have our features, but what do they mean, and how much do we actually need to know about them? Variance, standard deviation, sample count or min/max values: this kind of information helps us understand what is going on in the data. For example, the max value of the area_mean feature is about 2500 while the max of smoothness_mean is 0.16340, so do we need standardization or normalization before visualization, feature selection, feature extraction or classification? The answer is yes and no, not surprisingly.
# Anyway, let's go step by step and start with visualization.
x.describe()
# Visualization
# To visualize the data we will mainly use violin plots and swarm plots, both to inform you and for variety compared to other kernels. In practice, what I use most are the violin plot and the swarm plot. Remember, we are not selecting features; we are just getting to know the data, like reading the drink list at the bar door.
# Before the violin and swarm plots we need normalization or standardization, because the differences between the value ranges of the features are too large to observe on a single plot. I plot the features in 3 groups, with 10 features per group, so they are easier to inspect.
# first ten features
data_dia = y
data = x
data_n_2 = (data - data.mean()) / (data.std()) # standardization
data = pd.concat([y, data_n_2.iloc[:, 0:10]], axis=1)
data = pd.melt(data, id_vars="diagnosis", var_name="features", value_name="value")
fig = px.violin(
data, y="value", x="features", color="diagnosis", box=True, points="all"
)
fig.show()
# Let's interpret the plot above together. For example, in the texture_mean feature the medians of Malignant and Benign look separated, so it could be useful for classification. In the fractal_dimension_mean feature, however, the medians of Malignant and Benign do not look separated, so it does not give good information for classification.
# Second ten features
data = pd.concat([y, data_n_2.iloc[:, 10:20]], axis=1)
data = pd.melt(data, id_vars="diagnosis", var_name="features", value_name="value")
fig = px.violin(
data, y="value", x="features", color="diagnosis", box=True, points="all"
)
fig.show()
# third ten features
data = pd.concat([y, data_n_2.iloc[:, 20:31]], axis=1)
data = pd.melt(data, id_vars="diagnosis", var_name="features", value_name="value")
fig = px.violin(
data, y="value", x="features", color="diagnosis", box=True, points="all"
)
fig.show()
# Let's interpret one more thing from the plot above: the concavity_worst and concave points_worst variables look similar, but how can we decide whether they are correlated with each other? (It is not always true, but basically if features are strongly correlated with each other we can drop one of them.)
# To compare two features in more depth, let's use a joint plot. As can be seen below, they really are correlated. The Pearson r value is the correlation coefficient and 1 is the highest possible value, so a correlation around 0.86 seems high enough to call them correlated. Remember, we are not selecting features yet, we are just trying to get an idea about them.
sns.set(style="white")
df = x.loc[:, ["radius_worst", "perimeter_worst", "area_worst"]]
g = sns.PairGrid(df, diag_sharey=False)
g.map_lower(sns.kdeplot, cmap="Blues_d")
g.map_upper(plt.scatter)
g.map_diag(sns.kdeplot, lw=3)
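# A minimal sketch (my addition) of the joint plot mentioned in the text above: it compares
# concavity_worst with concave points_worst and prints their Pearson correlation.
print(
    "Pearson correlation (concavity_worst vs concave points_worst):",
    x["concavity_worst"].corr(x["concave points_worst"]),
)
sns.jointplot(x="concavity_worst", y="concave points_worst", data=x, kind="reg")
plt.show()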
sns.set(style="whitegrid", palette="muted")
data_dia = y
data = x
data_n_2 = (data - data.mean()) / (data.std()) # standardization
data = pd.concat([y, data_n_2.iloc[:, 0:10]], axis=1)
data = pd.melt(data, id_vars="diagnosis", var_name="features", value_name="value")
plt.figure(figsize=(10, 10))
tic = time.time()
sns.swarmplot(x="features", y="value", hue="diagnosis", data=data)
plt.xticks(rotation=90)
data = pd.concat([y, data_n_2.iloc[:, 10:20]], axis=1)
data = pd.melt(data, id_vars="diagnosis", var_name="features", value_name="value")
plt.figure(figsize=(10, 10))
sns.swarmplot(x="features", y="value", hue="diagnosis", data=data)
plt.xticks(rotation=90)
data = pd.concat([y, data_n_2.iloc[:, 20:31]], axis=1)
data = pd.melt(data, id_vars="diagnosis", var_name="features", value_name="value")
plt.figure(figsize=(10, 10))
sns.swarmplot(x="features", y="value", hue="diagnosis", data=data)
toc = time.time()
plt.xticks(rotation=90)
print("swarm plot time: ", toc - tic, " s")
# They look great, and you can see the variance more clearly. Let me ask you a question: in these three plots, which feature looks cleanest for classification? To me, area_worst in the last swarm plot looks mostly (though not completely) separated into malignant and benign. In swarm plot 2, however, smoothness_se mixes malignant and benign, so it would be hard to classify using that feature.
# What if we want to observe all the correlations between features? You are right, the answer is the heatmap, an old but powerful plotting method.
def dummies(train_df: pd.DataFrame, columns):
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
train_df[columns] = le.fit_transform(train_df[columns])
train_df = pd.get_dummies(train_df, columns=[columns])
return train_df
dataa = pd.read_csv("/kaggle/input/breast-cancer-wisconsin-data/data.csv")
dataa = dummies(dataa, "diagnosis")
dataa.head()
dataa["diagnosis"] = dataa["diagnosis_0"]
list = ["Unnamed: 32", "id", "diagnosis_1", "diagnosis_0"]
dataa = dataa.drop(list, axis=1)
# correlation map
f, ax = plt.subplots(figsize=(18, 18))
sns.heatmap(dataa.corr(), annot=True, linewidths=0.5, fmt=".1f", ax=ax)
# correlation map
f, ax = plt.subplots(figsize=(18, 18))
sns.heatmap(x.corr(), annot=True, linewidths=0.5, fmt=".1f", ax=ax)
import statsmodels.api as sm
def p_values(df, pred_df, row, col, liste: list):
    """Fit an OLS model on the selected columns and return (model summary, design matrix X_l).

    The `row` and `col` arguments are kept for compatibility with the call below;
    the intercept is added with sm.add_constant instead of appending a column of ones manually.
    """
    global X_l
    X_l = df.iloc[:, liste].values
    X_l = pd.DataFrame(np.array(X_l, dtype=float))
    X_l = sm.add_constant(X_l)  # add the intercept term
    model = sm.OLS(pred_df, X_l).fit()
    return model.summary(), X_l
x
dataa1 = dataa.drop(labels="diagnosis", axis=1)
dataa_s = pd.DataFrame(dataa["diagnosis"])
p_values(dataa1, dataa_s, 569, 30, range(0, 30))
# Feature Selection and Random Forest Classification
# Today our goal is to try new cocktails. Say we are finally at a bar and want to taste different drinks, so we need to compare the ingredients of the drinks. If one of them contains lemon, then after drinking it we should eliminate the other drinks that contain lemon, so that we can experience very different flavours.
# In this section we will select features with different methods: feature selection with correlation, univariate feature selection, recursive feature elimination (RFE), recursive feature elimination with cross-validation (RFECV) and tree-based feature selection. We will use random forest classification to train our model and make predictions.
# 1) Feature selection with correlation and random forest classification
# As seen in the heatmap, radius_mean, perimeter_mean and area_mean are correlated with each other, so we will only use area_mean. If you ask how I choose area_mean as the feature to keep, there is actually no single correct answer; I just look at the swarm plots and area_mean looks clear to me, but we cannot make an exact separation among correlated features without trying. So let's find the other correlated features and then check the accuracy with a random forest classifier.
# compactness_mean, concavity_mean and concave points_mean are correlated with each other, so I only choose concavity_mean. Apart from these, radius_se, perimeter_se and area_se are correlated and I only use area_se. radius_worst, perimeter_worst and area_worst are correlated, so I use area_worst. compactness_worst, concavity_worst and concave points_worst are correlated, so I use concavity_worst. compactness_se, concavity_se and concave points_se are correlated, so I use concavity_se. texture_mean and texture_worst are correlated and I use texture_mean. area_worst and area_mean are correlated, and I use area_mean.
drop_list1 = [
"perimeter_mean",
"radius_mean",
"compactness_mean",
"concave points_mean",
"radius_se",
"perimeter_se",
"radius_worst",
"perimeter_worst",
"compactness_worst",
"concave points_worst",
"compactness_se",
"concave points_se",
"texture_worst",
"area_worst",
]
x_1 = x.drop(drop_list1, axis=1) # do not modify x, we will use it later
x_1.head()
# After dropping the correlated features, as can be seen in the correlation matrix below, there are almost no correlated features left. Actually, I know there is still a correlation value of about 0.9, and you can see it too, but let's see together what happens if we do not drop it.
# correlation map
f, ax = plt.subplots(figsize=(14, 14))
sns.heatmap(x_1.corr(), annot=True, linewidths=0.5, fmt=".1f", ax=ax)
# Well, we chose our features, but did we choose correctly? Let's use a random forest and find the accuracy with the selected features.
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, confusion_matrix
from sklearn.metrics import accuracy_score
# split data train 70 % and test 30 %
x_train, x_test, y_train, y_test = train_test_split(
x_1, y, test_size=0.3, random_state=42
)
# random forest classifier with n_estimators=10 (default)
clf_rf = RandomForestClassifier(random_state=43)
clr_rf = clf_rf.fit(x_train, y_train)
ac = accuracy_score(y_test, clf_rf.predict(x_test))
print("Accuracy is: ", ac)
cm = confusion_matrix(y_test, clf_rf.predict(x_test))
sns.heatmap(cm, annot=True, fmt="d")
# The accuracy is about 95%, and as can be seen in the confusion matrix we make very few wrong predictions. Now let's look at other feature selection methods to find better results.
# 2) Univariate feature selection and random forest classification
# In univariate feature selection we will use SelectKBest, which removes all but the k highest-scoring features. http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html#sklearn.feature_selection.SelectKBest
# In this method we need to choose how many features to use. For example, should k (the number of features) be 5, 10 or 15? The answer can only be found by trying, or intuitively. I do not try all combinations; I just choose k = 5 and find the best 5 features.
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
# find best scored 5 features
select_feature = SelectKBest(chi2, k=5).fit(x_train, y_train)
print("Score list:", select_feature.scores_)
print("Feature list:", x_train.columns)
# The best 5 features for classification are area_mean, area_se, texture_mean, concavity_worst and concavity_mean. So let's see what happens if we use only these 5 best-scoring features.
x_train_2 = select_feature.transform(x_train)
x_test_2 = select_feature.transform(x_test)
# random forest classifier with n_estimators=10 (default)
clf_rf_2 = RandomForestClassifier()
clr_rf_2 = clf_rf_2.fit(x_train_2, y_train)
ac_2 = accuracy_score(y_test, clf_rf_2.predict(x_test_2))
print("Accuracy is: ", ac_2)
cm_2 = confusion_matrix(y_test, clf_rf_2.predict(x_test_2))
sns.heatmap(cm_2, annot=True, fmt="d")
# The accuracy is about 96%, and as can be seen in the confusion matrix we make very few wrong predictions. What we have done so far is to select features according to the correlation matrix and the SelectKBest method. Although we only used 5 features with SelectKBest, the accuracies look similar. Now let's look at other feature selection methods to find better results.
# 3) Recursive feature elimination (RFE) with random forest
# http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html Basically, it uses one of the classification methods (random forest in our example) and assigns a weight to each feature. The features whose absolute weights are the smallest are pruned from the current set of features. This procedure is repeated recursively until the desired number of features remains in the pruned set.
# As in the previous method, we will use 5 features. But which 5 features will we use? We will choose them with the RFE method.
from sklearn.feature_selection import RFE
# Create the RFE object and rank each pixel
clf_rf_3 = RandomForestClassifier()
rfe = RFE(estimator=clf_rf_3, n_features_to_select=5, step=1)
rfe = rfe.fit(x_train, y_train)
print("Chosen best 5 feature by rfe:", x_train.columns[rfe.support_])
# The best 5 features chosen by RFE are texture_mean, area_mean, concavity_mean, area_se and concavity_worst. They are the same as those of the previous (SelectKBest) method, so we do not need to compute the accuracy again. In short, we can say that we made a good feature selection with the RFE and SelectKBest methods. However, there is still an issue: OK, we find the best 5 features with two different methods and they are the same, but why 5? Maybe we would get better accuracy with the best 2 or the best 15 features. Therefore, let's see how many features we should use with the RFECV method.
# 4) Recursive feature elimination with cross-validation and random forest classification
# http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html Now we will find not only the best features but also how many features we need for the best accuracy.
from sklearn.feature_selection import RFECV
# The "accuracy" scoring is proportional to the number of correct classifications
clf_rf_4 = RandomForestClassifier()
rfecv = RFECV(
estimator=clf_rf_4, step=1, cv=5, scoring="accuracy"
) # 5-fold cross-validation
rfecv = rfecv.fit(x_train, y_train)
print("Optimal number of features :", rfecv.n_features_)
print("Best features :", x_train.columns[rfecv.support_])
# Finally, RFECV selects 11 features as optimal for classification (they are listed in the output above). Let's also plot the cross-validated accuracy against the number of selected features; a short sketch follows below.
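# A minimal sketch (my addition): plot the cross-validated accuracy against the number of
# selected features. Newer scikit-learn versions expose `cv_results_`; older ones expose
# `grid_scores_`, so both cases are handled here.
if hasattr(rfecv, "cv_results_"):
    scores = rfecv.cv_results_["mean_test_score"]
else:
    scores = rfecv.grid_scores_
plt.figure(figsize=(10, 6))
plt.plot(range(1, len(scores) + 1), scores, marker="o")
plt.xlabel("Number of selected features")
plt.ylabel("Cross-validated accuracy")
plt.title("RFECV: accuracy vs number of features")
plt.show()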
# 5) Tree based feature selection and random forest classification
# http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html In the random forest classification method there is a feature_importances_ attribute that gives the feature importances (the higher, the more important the feature). Note: for feature importances to be meaningful, the training data should not contain strongly correlated features. Random forest chooses features randomly at each iteration, so the order of the feature importance list can change between runs.
#
clf_rf_5 = RandomForestClassifier()
clr_rf_5 = clf_rf_5.fit(x_train, y_train)
importances = clr_rf_5.feature_importances_
std = np.std([tree.feature_importances_ for tree in clf_rf_5.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(x_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure(1, figsize=(14, 13))
plt.title("Feature importances")
plt.bar(
range(x_train.shape[1]),
importances[indices],
color="g",
yerr=std[indices],
align="center",
)
plt.xticks(range(x_train.shape[1]), x_train.columns[indices], rotation=90)
plt.xlim([-1, x_train.shape[1]])
plt.show()
# As you can see in the plot above, after the best 5 features the importance of the features drops off, so we can focus on those 5 features. As I mentioned before, I care about understanding the features and finding the best ones.
# Feature Extraction with PCA
# http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html We will use principal component analysis (PCA) for feature extraction. Before PCA, we need to normalize the data so that PCA performs better.
# split data train 70 % and test 30 %
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.3, random_state=42
)
# normalization
x_train_N = (x_train - x_train.mean()) / (x_train.max() - x_train.min())
x_test_N = (x_test - x_test.mean()) / (x_test.max() - x_test.min())
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(x_train_N)
plt.figure(1, figsize=(14, 13))
plt.clf()
plt.axes([0.2, 0.2, 0.7, 0.7])
plt.plot(pca.explained_variance_ratio_, linewidth=2)
plt.axis("tight")
plt.xlabel("n_components")
plt.ylabel("explained_variance_ratio_")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/073/129073676.ipynb
|
breast-cancer-wisconsin-data
| null |
[{"Id": 129073676, "ScriptId": 38361348, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9683291, "CreationDate": "05/10/2023 19:15:26", "VersionNumber": 2.0, "Title": "Breast Cancer Wisconsin Feature Selection and CNN", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 370.0, "LinesInsertedFromPrevious": 277.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 93.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184805954, "KernelVersionId": 129073676, "SourceDatasetVersionId": 408}]
|
[{"Id": 408, "DatasetId": 180, "DatasourceVersionId": 408, "CreatorUserId": 711301, "LicenseName": "CC BY-NC-SA 4.0", "CreationDate": "09/25/2016 10:49:04", "VersionNumber": 2.0, "Title": "Breast Cancer Wisconsin (Diagnostic) Data Set", "Slug": "breast-cancer-wisconsin-data", "Subtitle": "Predict whether the cancer is benign or malignant", "Description": "Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. \nn the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: \"Robust Linear Programming Discrimination of Two Linearly Inseparable Sets\", Optimization Methods and Software 1, 1992, 23-34]. \n\nThis database is also available through the UW CS ftp server: \nftp ftp.cs.wisc.edu \ncd math-prog/cpo-dataset/machine-learn/WDBC/\n\nAlso can be found on UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29\n\nAttribute Information:\n\n1) ID number \n2) Diagnosis (M = malignant, B = benign) \n3-32) \n\nTen real-valued features are computed for each cell nucleus: \n\na) radius (mean of distances from center to points on the perimeter) \nb) texture (standard deviation of gray-scale values) \nc) perimeter \nd) area \ne) smoothness (local variation in radius lengths) \nf) compactness (perimeter^2 / area - 1.0) \ng) concavity (severity of concave portions of the contour) \nh) concave points (number of concave portions of the contour) \ni) symmetry \nj) fractal dimension (\"coastline approximation\" - 1)\n\nThe mean, standard error and \"worst\" or largest (mean of the three\nlargest values) of these features were computed for each image,\nresulting in 30 features. For instance, field 3 is Mean Radius, field\n13 is Radius SE, field 23 is Worst Radius.\n\nAll feature values are recoded with four significant digits.\n\nMissing attribute values: none\n\nClass distribution: 357 benign, 212 malignant", "VersionNotes": "This updated dataset has column names added", "TotalCompressedBytes": 125204.0, "TotalUncompressedBytes": 125204.0}]
|
[{"Id": 180, "CreatorUserId": 711301, "OwnerUserId": NaN, "OwnerOrganizationId": 7.0, "CurrentDatasetVersionId": 408.0, "CurrentDatasourceVersionId": 408.0, "ForumId": 1547, "Type": 2, "CreationDate": "09/19/2016 20:27:05", "LastActivityDate": "02/06/2018", "TotalViews": 1744898, "TotalDownloads": 301790, "TotalVotes": 3191, "TotalKernels": 2628}]
| null |
| false | 0 | 7,638 | 0 | 8,164 | 7,638 |
||
129073930
|
import numpy as np
import pandas as pd
import optuna
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import KFold
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from catboost import CatBoostRegressor
from tqdm import tqdm
from sklearn.metrics import mean_squared_error
import os
for dirname, _, filenames in os.walk("/kaggle/input/playground-series-s3e14"):
for filename in filenames:
print(os.path.join(dirname, filename))
# # Competition Page
# https://www.kaggle.com/competitions/playground-series-s3e14
train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
# ### Drop id from train and test set
train.drop(columns=["id"], inplace=True)
test.drop(columns=["id"], inplace=True)
# ### Checking for null and duplicates
train.isna().sum()
train.duplicated().sum()
train.drop_duplicates(inplace=True)
train.reset_index(drop=True, inplace=True)
for i in train.columns:
print(train[i].value_counts())
# ### Summary stats
train.describe()
# ### Correlation heat map
corr = train[["fruitset", "fruitmass", "seeds", "yield"]].corr()
corr.style.background_gradient(cmap="coolwarm")
# ### Plotting the continuous predictor variables
plt.figure(figsize=(10, 10))
for i in range(len(["fruitset", "fruitmass", "seeds"])):
plt.subplot(3, 3, i + 1)
sns.histplot(x=train[(["fruitset", "fruitmass", "seeds"])[i]], kde=True)
plt.tight_layout()
# ### Plotting the boxplot of continuous predictor variables
plt.figure(figsize=(10, 10))
for i in range(len(["fruitset", "fruitmass", "seeds"])):
plt.subplot(3, 3, i + 1)
sns.boxplot(y=train[(["fruitset", "fruitmass", "seeds"])[i]])
plt.tight_layout()
# ### Plotting the categorical variables
plt.figure(figsize=(12, 12))
for i in range(
len(
[
"clonesize",
"honeybee",
"bumbles",
"andrena",
"osmia",
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"AverageRainingDays",
]
)
):
plt.subplot(5, 3, i + 1)
sns.countplot(
x=train[
(
[
"clonesize",
"honeybee",
"bumbles",
"andrena",
"osmia",
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"AverageRainingDays",
]
)[i]
]
)
plt.tight_layout()
# ### Plotting the target variable
sns.histplot(x=train["yield"], kde=True)
plt.show()
# ### Separating features and target variable
y = train["yield"]
train.drop(columns=["yield"], inplace=True)
# ### Optuna parameter tuning- XG Boost
def obj_xg(trial):
params = {
"max_depth": trial.suggest_int("max_depth", 1, 10),
"learning_rate": trial.suggest_float("learning_rate", 0.1, 1),
"n_estimators": trial.suggest_int("n_estimators", 200, 1000),
"gamma": trial.suggest_float("gamma", 1e-5, 2),
"min_child_weight": trial.suggest_int("min_child_weight", 1, 20),
"subsample": trial.suggest_float("subsample", 0, 1),
"colsample_bytree": trial.suggest_float("colsample_bytree", 0, 1),
"reg_alpha": trial.suggest_float("reg_alpha", 0, 1),
"reg_lambda": trial.suggest_float("reg_lambda", 0, 1),
}
scores = []
optuna_model = XGBRegressor(**params)
cv = KFold(n_splits=10, random_state=100, shuffle=True)
for train_index, test_index in cv.split(train, y):
trainx, testx = train.iloc[train_index], train.iloc[test_index]
trainy, testy = y[train_index], y[test_index]
optuna_model.fit(trainx, trainy)
predy = optuna_model.predict(testx)
scores.append(mean_squared_error(testy, predy, squared=False))
return np.mean(scores)
study_xg = optuna.create_study(direction="minimize")
optuna.logging.set_verbosity(optuna.logging.WARNING)
n_trials = 50
with tqdm(total=n_trials) as pbar:
for i in range(n_trials):
study_xg.optimize(obj_xg, n_trials=1)
pbar.update(1)
# ### Optuna parameter tuning- Light GBM
def obj_light(trial):
    params = {
"max_depth": trial.suggest_int("max_depth", 1, 10),
"n_estimators": trial.suggest_int("n_estimators", 200, 1000),
"learning_rate": trial.suggest_float("learning_rate", 0.1, 1),
"min_child_weight": trial.suggest_int("min_child_weight", 1, 20),
"subsample": trial.suggest_float("subsample", 0, 1),
"colsample_bytree": trial.suggest_float("colsample_bytree", 0, 1),
"reg_alpha": trial.suggest_float("reg_alpha", 0, 1),
"reg_lambda": trial.suggest_float("reg_lambda", 0, 1),
}
scores = []
optuna_model = LGBMRegressor(**params)
cv = KFold(n_splits=10, shuffle=True, random_state=100)
for train_index, test_index in cv.split(train, y):
trainx, testx = train.iloc[train_index], train.iloc[test_index]
trainy, testy = y[train_index], y[test_index]
optuna_model.fit(trainx, trainy)
predy = optuna_model.predict(testx)
scores.append(mean_squared_error(testy, predy, squared=False))
return np.mean(scores)
study_light = optuna.create_study(direction="minimize")
n_trials = 50
with tqdm(total=n_trials) as pbar:
for i in range(n_trials):
study_light.optimize(obj_light, n_trials=1)
pbar.update(1)
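# A hedged sketch (an addition, not part of the original notebook) of how the tuned
# parameters could be pulled out of a study and used to fit a final model; the
# attribute names below follow the standard Optuna Study API, and the variable names
# are only illustrative.
print("Best XGBoost CV RMSE:", study_xg.best_value)
print("Best XGBoost params:", study_xg.best_params)
final_xgb = XGBRegressor(**study_xg.best_params)
final_xgb.fit(train, y)
test_predictions = final_xgb.predict(test)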
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/073/129073930.ipynb
| null | null |
[{"Id": 129073930, "ScriptId": 38327225, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11722122, "CreationDate": "05/10/2023 19:18:45", "VersionNumber": 2.0, "Title": "Playground_S3_E14(Optuna - XGB, LGBM, CATBoost)", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 154.0, "LinesInsertedFromPrevious": 69.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 85.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np
import pandas as pd
import optuna
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import KFold
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from catboost import CatBoostRegressor
from tqdm import tqdm
from sklearn.metrics import mean_squared_error
import os
for dirname, _, filenames in os.walk("/kaggle/input/playground-series-s3e14"):
for filename in filenames:
print(os.path.join(dirname, filename))
# # Competition Page
# https://www.kaggle.com/competitions/playground-series-s3e14
train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
# ### Drop id from train and test set
train.drop(columns=["id"], inplace=True)
test.drop(columns=["id"], inplace=True)
# ### Checking for null and duplicates
train.isna().sum()
train.duplicated().sum()
train.drop_duplicates(inplace=True)
train.reset_index(drop=True, inplace=True)
for i in train.columns:
print(train[i].value_counts())
# ### Summary stats
train.describe()
# ### Correlation heat map
corr = train[["fruitset", "fruitmass", "seeds", "yield"]].corr()
corr.style.background_gradient(cmap="coolwarm")
# ### Plotting the continuous predictor variables
plt.figure(figsize=(10, 10))
for i in range(len(["fruitset", "fruitmass", "seeds"])):
plt.subplot(3, 3, i + 1)
sns.histplot(x=train[(["fruitset", "fruitmass", "seeds"])[i]], kde=True)
plt.tight_layout()
# ### Plotting the boxplot of continuous predictor variables
plt.figure(figsize=(10, 10))
for i in range(len(["fruitset", "fruitmass", "seeds"])):
plt.subplot(3, 3, i + 1)
sns.boxplot(y=train[(["fruitset", "fruitmass", "seeds"])[i]])
plt.tight_layout()
# ### Plotting the categorical variables
plt.figure(figsize=(12, 12))
for i in range(
len(
[
"clonesize",
"honeybee",
"bumbles",
"andrena",
"osmia",
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"AverageRainingDays",
]
)
):
plt.subplot(5, 3, i + 1)
sns.countplot(
x=train[
(
[
"clonesize",
"honeybee",
"bumbles",
"andrena",
"osmia",
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"AverageRainingDays",
]
)[i]
]
)
plt.tight_layout()
# ### Plotting the target variable
sns.histplot(x=train["yield"], kde=True)
plt.show()
# ### Separating features and target variable
y = train["yield"]
train.drop(columns=["yield"], inplace=True)
# ### Optuna parameter tuning- XG Boost
def obj_xg(trial):
params = {
"max_depth": trial.suggest_int("max_depth", 1, 10),
"learning_rate": trial.suggest_float("learning_rate", 0.1, 1),
"n_estimators": trial.suggest_int("n_estimators", 200, 1000),
"gamma": trial.suggest_float("gamma", 1e-5, 2),
"min_child_weight": trial.suggest_int("min_child_weight", 1, 20),
"subsample": trial.suggest_float("subsample", 0, 1),
"colsample_bytree": trial.suggest_float("colsample_bytree", 0, 1),
"reg_alpha": trial.suggest_float("reg_alpha", 0, 1),
"reg_lambda": trial.suggest_float("reg_lambda", 0, 1),
}
scores = []
optuna_model = XGBRegressor(**params)
cv = KFold(n_splits=10, random_state=100, shuffle=True)
for train_index, test_index in cv.split(train, y):
trainx, testx = train.iloc[train_index], train.iloc[test_index]
trainy, testy = y[train_index], y[test_index]
optuna_model.fit(trainx, trainy)
predy = optuna_model.predict(testx)
scores.append(mean_squared_error(testy, predy, squared=False))
return np.mean(scores)
study_xg = optuna.create_study(direction="minimize")
optuna.logging.set_verbosity(optuna.logging.WARNING)
n_trials = 50
with tqdm(total=n_trials) as pbar:
for i in range(n_trials):
study_xg.optimize(obj_xg, n_trials=1)
pbar.update(1)
# ### Optuna parameter tuning- Light GBM
def obj_light(trial):
    params = {
"max_depth": trial.suggest_int("max_depth", 1, 10),
"n_estimators": trial.suggest_int("n_estimators", 200, 1000),
"learning_rate": trial.suggest_float("learning_rate", 0.1, 1),
"min_child_weight": trial.suggest_int("min_child_weight", 1, 20),
"subsample": trial.suggest_float("subsample", 0, 1),
"colsample_bytree": trial.suggest_float("colsample_bytree", 0, 1),
"reg_alpha": trial.suggest_float("reg_alpha", 0, 1),
"reg_lambda": trial.suggest_float("reg_lambda", 0, 1),
}
scores = []
optuna_model = LGBMRegressor(**params)
cv = KFold(n_splits=10, shuffle=True, random_state=100)
for train_index, test_index in cv.split(train, y):
trainx, testx = train.iloc[train_index], train.iloc[test_index]
trainy, testy = y[train_index], y[test_index]
optuna_model.fit(trainx, trainy)
predy = optuna_model.predict(testx)
scores.append(mean_squared_error(testy, predy, squared=False))
return np.mean(scores)
study_light = optuna.create_study(direction="minimize")
n_trials = 50
with tqdm(total=n_trials) as pbar:
for i in range(n_trials):
study_light.optimize(obj_light, n_trials=1)
pbar.update(1)
| false | 0 | 1,830 | 0 | 1,830 | 1,830 |
||
129190024
|
<jupyter_start><jupyter_text>New Plant Diseases Dataset
**This dataset is recreated using offline augmentation from the original dataset. The original dataset can be found on [this][1] github repo. This dataset consists of about 87K rgb images of healthy and diseased crop leaves which is categorized into 38 different classes. The total dataset is divided into 80/20 ratio of training and validation set preserving the directory structure.
A new directory containing 33 test images is created later for prediction purpose.**
[1]: https://github.com/spMohanty/PlantVillage-Dataset
Kaggle dataset identifier: new-plant-diseases-dataset
<jupyter_script>import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np
import os
image_size = 224
target_size = (image_size, image_size)
input_shape = (image_size, image_size, 3)
batch_size = 32
epochs = 25
base_dir = "../input/new-plant-diseases-dataset/new plant diseases dataset(augmented)/New Plant Diseases Dataset(Augmented)"
train_dir = os.path.join(base_dir, "train")
test_dir = os.path.join(base_dir, "valid")
train_datagen = keras.preprocessing.image.ImageDataGenerator(
rescale=1 / 255.0,
shear_range=0.2,
zoom_range=0.2,
width_shift_range=0.2,
height_shift_range=0.2,
fill_mode="nearest",
)
test_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1 / 255.0)
train_data = train_datagen.flow_from_directory(
train_dir,
target_size=(image_size, image_size),
batch_size=batch_size,
class_mode="categorical",
)
test_data = test_datagen.flow_from_directory(
test_dir,
target_size=(image_size, image_size),
batch_size=batch_size,
class_mode="categorical",
)
categories = list(train_data.class_indices.keys())
print(train_data.class_indices)
import json
with open("class_indices.json", "w") as f:
json.dump(train_data.class_indices, f)
from IPython.display import FileLink
FileLink(r"class_indices.json")
base_model = tf.keras.applications.MobileNet(
weights="imagenet", include_top=False, input_shape=input_shape
)
base_model.trainable = False
inputs = keras.Input(shape=input_shape)
x = base_model(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.Dense(len(categories), activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=x, name="LeafDisease_MobileNet")
optimizer = tf.keras.optimizers.Adam()
model.compile(
optimizer=optimizer,
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),  # the head already applies softmax, so the outputs are probabilities, not logits
metrics=[keras.metrics.CategoricalAccuracy(), "accuracy"],
)
history = model.fit(
train_data,
validation_data=test_data,
epochs=epochs,
steps_per_epoch=150,
validation_steps=100,
)
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(len(loss))
fig = plt.figure(figsize=(10, 6))
plt.plot(epochs, loss, c="red", label="Training")
plt.plot(epochs, val_loss, c="blue", label="Validation")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
acc = history.history["categorical_accuracy"]
val_acc = history.history["val_categorical_accuracy"]
epochs = range(len(acc))
fig = plt.figure(figsize=(10, 6))
plt.plot(epochs, acc, c="red", label="Training")
plt.plot(epochs, val_acc, c="blue", label="Validation")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
model.save("plant_disease")
import tensorflow as tf
# Convert the model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the TFLite model
with open("model.tflite", "wb") as f:
f.write(tflite_model)
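# A minimal sketch (an addition, not part of the original notebook) of how the exported
# model.tflite could be used for inference with the TFLite interpreter; the random input
# below is only a placeholder for a real preprocessed image.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
dummy_image = np.random.rand(1, image_size, image_size, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy_image)
interpreter.invoke()
probabilities = interpreter.get_tensor(output_details[0]["index"])
print("Predicted class:", categories[int(np.argmax(probabilities))])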
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/190/129190024.ipynb
|
new-plant-diseases-dataset
|
vipoooool
|
[{"Id": 129190024, "ScriptId": 38388772, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8686594, "CreationDate": "05/11/2023 17:01:28", "VersionNumber": 1.0, "Title": "Plant disease classification using Mobilnet", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 140.0, "LinesInsertedFromPrevious": 140.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185016727, "KernelVersionId": 129190024, "SourceDatasetVersionId": 182633}]
|
[{"Id": 182633, "DatasetId": 78313, "DatasourceVersionId": 193494, "CreatorUserId": 2009285, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "11/18/2018 07:09:16", "VersionNumber": 2.0, "Title": "New Plant Diseases Dataset", "Slug": "new-plant-diseases-dataset", "Subtitle": "Image dataset containing different healthy and unhealthy crop leaves.", "Description": "**This dataset is recreated using offline augmentation from the original dataset. The original dataset can be found on [this][1] github repo. This dataset consists of about 87K rgb images of healthy and diseased crop leaves which is categorized into 38 different classes. The total dataset is divided into 80/20 ratio of training and validation set preserving the directory structure.\nA new directory containing 33 test images is created later for prediction purpose.**\n\n\n [1]: https://github.com/spMohanty/PlantVillage-Dataset", "VersionNotes": "New Test Images", "TotalCompressedBytes": 1445887779.0, "TotalUncompressedBytes": 1445887779.0}]
|
[{"Id": 78313, "CreatorUserId": 2009285, "OwnerUserId": 2009285.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 182633.0, "CurrentDatasourceVersionId": 193494.0, "ForumId": 87652, "Type": 2, "CreationDate": "11/16/2018 12:17:57", "LastActivityDate": "11/16/2018", "TotalViews": 387678, "TotalDownloads": 47287, "TotalVotes": 766, "TotalKernels": 244}]
|
[{"Id": 2009285, "UserName": "vipoooool", "DisplayName": "Samir Bhattarai", "RegisterDate": "06/21/2018", "PerformanceTier": 0}]
|
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np
import os
image_size = 224
target_size = (image_size, image_size)
input_shape = (image_size, image_size, 3)
batch_size = 32
epochs = 25
base_dir = "../input/new-plant-diseases-dataset/new plant diseases dataset(augmented)/New Plant Diseases Dataset(Augmented)"
train_dir = os.path.join(base_dir, "train")
test_dir = os.path.join(base_dir, "valid")
train_datagen = keras.preprocessing.image.ImageDataGenerator(
rescale=1 / 255.0,
shear_range=0.2,
zoom_range=0.2,
width_shift_range=0.2,
height_shift_range=0.2,
fill_mode="nearest",
)
test_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1 / 255.0)
train_data = train_datagen.flow_from_directory(
train_dir,
target_size=(image_size, image_size),
batch_size=batch_size,
class_mode="categorical",
)
test_data = test_datagen.flow_from_directory(
test_dir,
target_size=(image_size, image_size),
batch_size=batch_size,
class_mode="categorical",
)
categories = list(train_data.class_indices.keys())
print(train_data.class_indices)
import json
with open("class_indices.json", "w") as f:
json.dump(train_data.class_indices, f)
from IPython.display import FileLink
FileLink(r"class_indices.json")
base_model = tf.keras.applications.MobileNet(
weights="imagenet", include_top=False, input_shape=input_shape
)
base_model.trainable = False
inputs = keras.Input(shape=input_shape)
x = base_model(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.Dense(len(categories), activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=x, name="LeafDisease_MobileNet")
optimizer = tf.keras.optimizers.Adam()
model.compile(
optimizer=optimizer,
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),  # the head already applies softmax, so the outputs are probabilities, not logits
metrics=[keras.metrics.CategoricalAccuracy(), "accuracy"],
)
history = model.fit(
train_data,
validation_data=test_data,
epochs=epochs,
steps_per_epoch=150,
validation_steps=100,
)
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(len(loss))
fig = plt.figure(figsize=(10, 6))
plt.plot(epochs, loss, c="red", label="Training")
plt.plot(epochs, val_loss, c="blue", label="Validation")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
acc = history.history["categorical_accuracy"]
val_acc = history.history["val_categorical_accuracy"]
epochs = range(len(acc))
fig = plt.figure(figsize=(10, 6))
plt.plot(epochs, acc, c="red", label="Training")
plt.plot(epochs, val_acc, c="blue", label="Validation")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
model.save("plant_disease")
import tensorflow as tf
# Convert the model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the TFLite model
with open("model.tflite", "wb") as f:
f.write(tflite_model)
| false | 0 | 993 | 0 | 1,144 | 993 |
||
129190109
|
<jupyter_start><jupyter_text>Related Job Skills
The Job Skills Correlation dataset is a collection of data that provides insights into the relationships between different job skills and how they relate to each other. The dataset offers a valuable resource for researchers, policymakers, and employers interested in understanding the interdependencies between different job skills and their impact on job performance and success.
The dataset contains information about the correlation between different job skills, including technical skills, soft skills, and industry-specific skills. The dataset includes data for a wide range of occupations, from healthcare and technology to manufacturing and retail.
The dataset is particularly useful for researchers interested in understanding the skills required for different jobs and how these skills interact with each other. Policymakers can also use the dataset to develop strategies to promote skill development and training programs that take into account the interdependencies between different job skills.
Employers can also benefit from the dataset by identifying the skills that are most closely related to job success and performance in their industry. By understanding the correlations between different job skills, employers can develop more effective job training and recruitment programs that target the most relevant skills.
Overall, the Job Skills Correlation dataset is an essential resource for anyone interested in understanding the complex relationships between different job skills and their impact on job performance and success. By providing insights into the correlations between different job skills, the dataset can help individuals and organizations make more informed decisions about training, hiring, and career development.
Kaggle dataset identifier: related-job-skills
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
df = pd.read_csv("/kaggle/input/related-job-skills/related_skills.csv")
df.head()
def gen_corpus(df):
df["corpus"] = df["name"]
for col in df.columns[1:]:
df["corpus"] += " " + df[col]
return df["corpus"]
corpus = gen_corpus(df)
from nltk.stem.porter import PorterStemmer
ps = PorterStemmer()
import re
pattern = re.compile(r"\d+")  # raw string avoids the invalid escape sequence warning
def preprocess1(text):
s = []
for word in text.split():
if word.lower() != "c" and len(word) == 1:
continue
word = re.sub(pattern, "", word)
# word = ps.stem(word.lower())
s.append(word.lower())
return " ".join(s)
corpus.dropna(inplace=True)
corpus = corpus.apply(preprocess1)
corpus
lines = []
for i in range(corpus.size):
lines.append(corpus.iloc[i])
corpus1 = []
from nltk import word_tokenize
for line in lines:
corpus1.append(word_tokenize(line))
import gensim
model = gensim.models.Word2Vec(window=10, workers=2, vector_size=50)
model.build_vocab(corpus1)
model.train(corpus1, total_examples=model.corpus_count, epochs=50)
model.wv.most_similar("backend")
model.wv.similarity("server", "backend")
l = [
"js",
"javascript",
"nodejs",
"reactjs",
"react",
"angularjs",
"angular",
"backend",
"server",
]
j = ["dsa", "java", "python", "programming"]
model.wv["sql"]
resume = ["java", "python", "ml"]
jd = ["css", "java", "html"]
count = 0
for key in jd:
for word in resume:
if model.wv.similarity(key, word) >= 0.5:
count += 1
break
print(count)
20 - -80
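# A hedged follow-up sketch (an addition, not part of the original notebook): turn the raw
# match count into a percentage and skip any skill missing from the Word2Vec vocabulary,
# since wv.similarity raises a KeyError for unseen words; the 0.5 threshold mirrors the
# cell above.
def match_score(resume_skills, jd_skills, threshold=0.5):
    matched = 0
    for jd_skill in jd_skills:
        if jd_skill not in model.wv:
            continue  # ignore JD skills the model has never seen
        if any(
            skill in model.wv and model.wv.similarity(jd_skill, skill) >= threshold
            for skill in resume_skills
        ):
            matched += 1
    return 100 * matched / len(jd_skills)

print(match_score(resume, jd))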
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/190/129190109.ipynb
|
related-job-skills
|
ulrikthygepedersen
|
[{"Id": 129190109, "ScriptId": 38403734, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11375904, "CreationDate": "05/11/2023 17:02:35", "VersionNumber": 1.0, "Title": "notebookfa4c43bc7f", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 90.0, "LinesInsertedFromPrevious": 90.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185016848, "KernelVersionId": 129190109, "SourceDatasetVersionId": 5086757}]
|
[{"Id": 5086757, "DatasetId": 2953696, "DatasourceVersionId": 5157696, "CreatorUserId": 9580496, "LicenseName": "Attribution 4.0 International (CC BY 4.0)", "CreationDate": "03/01/2023 09:31:17", "VersionNumber": 1.0, "Title": "Related Job Skills", "Slug": "related-job-skills", "Subtitle": "Can you forecast which job skills are highly related?", "Description": "The Job Skills Correlation dataset is a collection of data that provides insights into the relationships between different job skills and how they relate to each other. The dataset offers a valuable resource for researchers, policymakers, and employers interested in understanding the interdependencies between different job skills and their impact on job performance and success.\n\nThe dataset contains information about the correlation between different job skills, including technical skills, soft skills, and industry-specific skills. The dataset includes data for a wide range of occupations, from healthcare and technology to manufacturing and retail.\n\nThe dataset is particularly useful for researchers interested in understanding the skills required for different jobs and how these skills interact with each other. Policymakers can also use the dataset to develop strategies to promote skill development and training programs that take into account the interdependencies between different job skills.\n\nEmployers can also benefit from the dataset by identifying the skills that are most closely related to job success and performance in their industry. By understanding the correlations between different job skills, employers can develop more effective job training and recruitment programs that target the most relevant skills.\n\nOverall, the Job Skills Correlation dataset is an essential resource for anyone interested in understanding the complex relationships between different job skills and their impact on job performance and success. By providing insights into the correlations between different job skills, the dataset can help individuals and organizations make more informed decisions about training, hiring, and career development.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2953696, "CreatorUserId": 9580496, "OwnerUserId": 9580496.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5086757.0, "CurrentDatasourceVersionId": 5157696.0, "ForumId": 2991781, "Type": 2, "CreationDate": "03/01/2023 09:31:17", "LastActivityDate": "03/01/2023", "TotalViews": 413, "TotalDownloads": 48, "TotalVotes": 1, "TotalKernels": 2}]
|
[{"Id": 9580496, "UserName": "ulrikthygepedersen", "DisplayName": "Ulrik Thyge Pedersen", "RegisterDate": "02/04/2022", "PerformanceTier": 2}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
df = pd.read_csv("/kaggle/input/related-job-skills/related_skills.csv")
df.head()
def gen_corpus(df):
df["corpus"] = df["name"]
for col in df.columns[1:]:
df["corpus"] += " " + df[col]
return df["corpus"]
corpus = gen_corpus(df)
from nltk.stem.porter import PorterStemmer
ps = PorterStemmer()
import re
pattern = re.compile(r"\d+")  # raw string avoids the invalid escape sequence warning
def preprocess1(text):
s = []
for word in text.split():
if word.lower() != "c" and len(word) == 1:
continue
word = re.sub(pattern, "", word)
# word = ps.stem(word.lower())
s.append(word.lower())
return " ".join(s)
corpus.dropna(inplace=True)
corpus = corpus.apply(preprocess1)
corpus
lines = []
for i in range(corpus.size):
lines.append(corpus.iloc[i])
corpus1 = []
from nltk import word_tokenize
for line in lines:
corpus1.append(word_tokenize(line))
import gensim
model = gensim.models.Word2Vec(window=10, workers=2, vector_size=50)
model.build_vocab(corpus1)
model.train(corpus1, total_examples=model.corpus_count, epochs=50)
model.wv.most_similar("backend")
model.wv.similarity("server", "backend")
l = [
"js",
"javascript",
"nodejs",
"reactjs",
"react",
"angularjs",
"angular",
"backend",
"server",
]
j = ["dsa", "java", "python", "programming"]
model.wv["sql"]
resume = ["java", "python", "ml"]
jd = ["css", "java", "html"]
count = 0
for key in jd:
for word in resume:
if model.wv.similarity(key, word) >= 0.5:
count += 1
break
print(count)
20 - -80
| false | 1 | 708 | 0 | 1,041 | 708 |
||
129097878
|
<jupyter_start><jupyter_text>MIAS-ROI-Mammography
Kaggle dataset identifier: mias-roi-mammography
<jupyter_script>#
# # Table of content
# [Read data: MINI-DDSM](#read-data)
# !pip install torchsummary
class clr:
# HEADER = '\033[95m'
# OKBLUE = '\033[94m'
# OKCYAN = '\033[96m'
# OKGREEN = '\033[92m'
# WARNING = '\033[93m'
# FAIL = '\033[91m'
# ENDC = '\033[0m'
# BOLD = '\033[1m'
# UNDERLINE = '\033[4m'
S = "\033[1;33m + \033[91m"
E = "\033[0m"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import cv2
import skimage.exposure as exposure
from glob import glob
from tqdm.notebook import tqdm
import time
from datetime import datetime
from IPython import display
import os
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models
import torch.nn.functional as F
import pytorch_lightning as pl
# from torchsummary import summary
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import multiprocessing as mp
import warnings
warnings.filterwarnings("ignore")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
cores = mp.cpu_count()
plt.rcParams.update({"font.size": 10})
plt.rcParams["figure.figsize"] = (8, 6)
print(clr.S + "Cores:" + clr.E, cores)
print(clr.S + "Device:" + clr.E, device)
print(clr.S + "Day: " + clr.E, datetime.now())
#
# # Read Data
# [prev](#content) - [content](#content) - [next](#dataset)
class Read_data:
def __init__(self, path_csv):
self.name_dataset = path_csv.split("-")[-3:][0].split("/")[-1]
self.df = pd.read_csv(f"{path_csv}/description.csv")
self.df = self.df[["Cancer", "Path_save"]]
self.df["Path_save"] = self.df["Path_save"].apply(lambda x: f"{path_csv}/{x}")
def stats_cancer(self):
stats = self.df["Cancer"].value_counts()
plt.figure(figsize=plt.figaspect(1))
plt.pie(
x=list(stats.values),
labels=list(stats.index),
autopct=lambda p: "{:.2f}% ({:.0f})".format(
p, round((p / 100) * sum(list(stats.values)), 0)
),
)
plt.title(f"sample number statistics of {self.name_dataset}")
plt.show()
mias = Read_data("/kaggle/input/mias-roi-mammography")
mias.stats_cancer()
inbreast = Read_data("/kaggle/input/inbreast-roi-mammography")
inbreast.stats_cancer()
ddsm = Read_data("/kaggle/input/mini-ddsm-roi-mammography")
ddsm.stats_cancer()
cmmd = Read_data("/kaggle/input/cmmd-roi-mammography")
cmmd.stats_cancer()
data = pd.concat([mias.df, inbreast.df, ddsm.df, cmmd.df])
stats = data["Cancer"].value_counts()
plt.pie(
x=list(stats.values),
labels=list(stats.index),
autopct=lambda p: "{:.2f}% ({:.0f})".format(
p, round((p / 100) * sum(list(stats.values)), 0)
),
)
plt.title(f"sample number statistics of all data")
plt.show()
#
# # Split train - valid - test
# [prev](#read-data) - [content](#content) - [next](#dataset)
df_train, df_temp = train_test_split(data, test_size=0.2, random_state=42)
df_valid, df_test = train_test_split(df_temp, test_size=0.5, random_state=42)
#
# # Custom dataset
# [prev](#split-data) - [content](#content) - [next](#dataloader)
def pre_processing(img):
img = img.astype(np.uint8)
# apply clahe with cv2
# clahe = cv2.createCLAHE(clipLimit=2, tileGridSize=(8, 8))
# img1 = clahe.apply(img)
# apply clahe with skimage
img2 = exposure.equalize_adapthist(img, clip_limit=0.02)
# apply histogram equalization
# img3 = cv2.equalizeHist(img)
return img2
transforms = torchvision.transforms.Compose(
[
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.5,), (0.5,)),
torchvision.transforms.Resize((512, 512)),
]
)
# path_img = df.Path.values[0]
# img = cv2.imread(path_img, 0)
# process = pre_processing(img)
# plt.imshow(process, cmap='gray')
# plt.axis('off')
class Dataset:
def __init__(self, df, transform=None):
self.df = df
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
row = self.df.iloc[idx]
img = cv2.imread(row["Path_save"], 0)
img = pre_processing(img)
label = row["Cancer"]
if self.transform:
img = self.transform(img)
label = torch.tensor(label)
return img, label
train_set = Dataset(df_train, transform=transforms)
valid_set = Dataset(df_valid, transform=transforms)
test_set = Dataset(df_test, transform=transforms)
def show_img(img, label):
if isinstance(img, torch.Tensor):
img = img.permute(1, 2, 0)
img = img.numpy()
if isinstance(label, torch.Tensor):
label = label.numpy()
plt.imshow(img, cmap="gray")
plt.title(f"Label: {label}")
plt.show()
show_img(*train_set[0])
#
# # Dataloader
# [prev](#dataset) - [content](#content) - [next](#model)
batch_size = 16
train_loader = DataLoader(
train_set, batch_size=batch_size, shuffle=True, num_workers=cores - 2
)
valid_loader = DataLoader(
valid_set, batch_size=batch_size, shuffle=False, num_workers=cores - 2
)
test_loader = DataLoader(
test_set, batch_size=batch_size, shuffle=False, num_workers=cores - 2
)
# Display image and label.
train_features, train_labels = next(iter(train_loader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.title(f"Label: {label}")
plt.show()
# plt.figure(figsize=(15, 15))
# for i, (img, label) in enumerate(train_set):
# plt.subplot(1,8,i+1)
# plt.imshow(img.squeeze(), cmap='gray')
# plt.axis('off')
# plt.subplots_adjust(wspace=None, hspace=None)
# plt.title(label)
# if i == 7:
# break
#
# # Model
# [prev](#dataloader) - [content](#content) - [next](#)
class Bottleneck(nn.Module):
expansion = (
3 # Number of output channels of the block relative to the input channels
)
def __init__(self, in_channels, out_channels, stride=1, downsample=None, width=32):
super().__init__()
self.conv1 = nn.Conv2d(in_channels, width, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(width)
self.conv2 = nn.Conv2d(
width, width, kernel_size=3, stride=stride, padding=1, bias=False
)
self.bn2 = nn.BatchNorm2d(width)
self.conv3 = nn.Conv2d(
width, out_channels * self.expansion, kernel_size=1, bias=False
)
self.bn3 = nn.BatchNorm2d(out_channels * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class HRNet(nn.Module):
def __init__(self, block, layers, num_classes=1, width=32):
super().__init__()
self.in_channels = 64
self.conv1 = nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0], stride=1, width=width)
self.layer2 = self._make_layer(block, 128, layers[1], stride=2, width=width)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2, width=width)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2, width=width)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(512 * block.expansion, num_classes)
def _make_layer(self, block, out_channels, blocks, stride=1, width=32):
downsample = None
if stride != 1 or self.in_channels != out_channels * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(
self.in_channels,
out_channels * block.expansion,
kernel_size=1,
stride=stride,
bias=False,
),
nn.BatchNorm2d(out_channels * block.expansion),
)
layers = []
layers.append(
block(self.in_channels, out_channels, stride, downsample, width=width)
)
self.in_channels = out_channels * block.expansion
for _ in range(1, blocks):
layers.append(block(self.in_channels, out_channels, width=width))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
x = torch.sigmoid(x)
return x
model = HRNet(Bottleneck, [3, 4, 23, 3]).to(device)
weight = torch.tensor([0.47, 0.53])  # class weights (currently unused)
# the network ends in a single sigmoid unit, so binary cross-entropy is the matching loss
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.005)
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(train_loader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        inputs = inputs.float().to(device)
        # BCELoss expects float targets shaped like the (batch, 1) sigmoid output
        labels = labels.float().unsqueeze(1).to(device)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print(f"[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}")
            running_loss = 0.0
print("Finished Training")
for epoch in range(40):
    running_loss = 0.0
    running_corrects = 0
    total_samples = 0
    all_preds = []
    all_labels = []
    for i, (inputs, labels) in enumerate(train_loader, 0):
        inputs = inputs.float().to(device)
        # BCELoss needs float targets with the same (batch, 1) shape as the sigmoid output
        labels = labels.float().unsqueeze(1).to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        # forward() already applies sigmoid, so the outputs are probabilities
        preds = outputs
        loss = criterion(outputs, labels)  # binary cross-entropy
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        running_corrects += torch.sum((preds > 0.5).float() == labels)
        total_samples += len(labels)
        # apply a 0.5 threshold to turn the probabilities into binary label predictions
        all_preds.extend((preds > 0.5).float().view(-1).tolist())
        all_labels.extend(labels.view(-1).tolist())
        if i % 10 == 5:
            train_acc = running_corrects / total_samples
            train_loss = running_loss / 10
            print(
                f"[Epoch {epoch + 1}, Batch {i}] loss: {train_loss:.3f}, acc: {train_acc:.3f}"
            )
            running_loss = 0.0
            running_corrects = 0
            total_samples = 0
    # Calculate F1-score, recall, and precision
    report = classification_report(all_labels, all_preds, output_dict=True)
    f1_score = report["weighted avg"]["f1-score"]
    recall = report["weighted avg"]["recall"]
    precision = report["weighted avg"]["precision"]
    # Reset running_corrects after calculating accuracy
    running_corrects = 0
# Step 5: Save your trained model
os.makedirs("/kaggle/working/model", exist_ok=True)  # make sure the output folder exists
torch.save(model.state_dict(), "/kaggle/working/model/HRnet_DDSM_Inbreast.pt")
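# A hedged evaluation sketch (an addition, not part of the original notebook): run the
# trained model over the held-out validation loader and report the same classification
# metrics used during training.
model.eval()
val_preds, val_labels = [], []
with torch.no_grad():
    for inputs, labels in valid_loader:
        inputs = inputs.float().to(device)
        outputs = model(inputs)  # already sigmoid probabilities
        val_preds.extend((outputs > 0.5).float().view(-1).cpu().tolist())
        val_labels.extend(labels.float().view(-1).tolist())
print(classification_report(val_labels, val_preds))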
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/097/129097878.ipynb
|
mias-roi-mammography
|
quachnam
|
[{"Id": 129097878, "ScriptId": 37068459, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 7848581, "CreationDate": "05/11/2023 02:09:12", "VersionNumber": 2.0, "Title": "Model CNN", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 389.0, "LinesInsertedFromPrevious": 193.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 196.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184852580, "KernelVersionId": 129097878, "SourceDatasetVersionId": 5623907}, {"Id": 184852581, "KernelVersionId": 129097878, "SourceDatasetVersionId": 5624085}, {"Id": 184852582, "KernelVersionId": 129097878, "SourceDatasetVersionId": 5624440}]
|
[{"Id": 5623907, "DatasetId": 3054622, "DatasourceVersionId": 5699110, "CreatorUserId": 7848581, "LicenseName": "Unknown", "CreationDate": "05/07/2023 07:24:12", "VersionNumber": 4.0, "Title": "MIAS-ROI-Mammography", "Slug": "mias-roi-mammography", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Data Update 2023-05-07", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3054622, "CreatorUserId": 7848581, "OwnerUserId": 7848581.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5623907.0, "CurrentDatasourceVersionId": 5699110.0, "ForumId": 3117180, "Type": 2, "CreationDate": "03/27/2023 17:42:32", "LastActivityDate": "03/27/2023", "TotalViews": 384, "TotalDownloads": 73, "TotalVotes": 3, "TotalKernels": 14}]
|
[{"Id": 7848581, "UserName": "quachnam", "DisplayName": "Qx Nam", "RegisterDate": "07/06/2021", "PerformanceTier": 1}]
|
#
# # Table of content
# [Read data: MINI-DDSM](#read-data)
# !pip install torchsummary
class clr:
# HEADER = '\033[95m'
# OKBLUE = '\033[94m'
# OKCYAN = '\033[96m'
# OKGREEN = '\033[92m'
# WARNING = '\033[93m'
# FAIL = '\033[91m'
# ENDC = '\033[0m'
# BOLD = '\033[1m'
# UNDERLINE = '\033[4m'
S = "\033[1;33m + \033[91m"
E = "\033[0m"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import cv2
import skimage.exposure as exposure
from glob import glob
from tqdm.notebook import tqdm
import time
from datetime import datetime
from IPython import display
import os
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models
import torch.nn.functional as F
import pytorch_lightning as pl
# from torchsummary import summary
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import multiprocessing as mp
import warnings
warnings.filterwarnings("ignore")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
cores = mp.cpu_count()
plt.rcParams.update({"font.size": 10})
plt.rcParams["figure.figsize"] = (8, 6)
print(clr.S + "Cores:" + clr.E, cores)
print(clr.S + "Device:" + clr.E, device)
print(clr.S + "Day: " + clr.E, datetime.now())
#
# # Read Data
# [prev](#content) - [content](#content) - [next](#dataset)
class Read_data:
def __init__(self, path_csv):
self.name_dataset = path_csv.split("-")[-3:][0].split("/")[-1]
self.df = pd.read_csv(f"{path_csv}/description.csv")
self.df = self.df[["Cancer", "Path_save"]]
self.df["Path_save"] = self.df["Path_save"].apply(lambda x: f"{path_csv}/{x}")
def stats_cancer(self):
stats = self.df["Cancer"].value_counts()
plt.figure(figsize=plt.figaspect(1))
plt.pie(
x=list(stats.values),
labels=list(stats.index),
autopct=lambda p: "{:.2f}% ({:.0f})".format(
p, round((p / 100) * sum(list(stats.values)), 0)
),
)
plt.title(f"sample number statistics of {self.name_dataset}")
plt.show()
mias = Read_data("/kaggle/input/mias-roi-mammography")
mias.stats_cancer()
inbreast = Read_data("/kaggle/input/inbreast-roi-mammography")
inbreast.stats_cancer()
ddsm = Read_data("/kaggle/input/mini-ddsm-roi-mammography")
ddsm.stats_cancer()
cmmd = Read_data("/kaggle/input/cmmd-roi-mammography")
cmmd.stats_cancer()
data = pd.concat([mias.df, inbreast.df, ddsm.df, cmmd.df])
stats = data["Cancer"].value_counts()
plt.pie(
x=list(stats.values),
labels=list(stats.index),
autopct=lambda p: "{:.2f}% ({:.0f})".format(
p, round((p / 100) * sum(list(stats.values)), 0)
),
)
plt.title(f"sample number statistics of all data")
plt.show()
#
# # Split train - valid - test
# [prev](#read-data) - [content](#content) - [next](#dataset)
df_train, df_temp = train_test_split(data, test_size=0.2, random_state=42)
df_valid, df_test = train_test_split(df_temp, test_size=0.5, random_state=42)
#
# # Custom dataset
# [prev](#split-data) - [content](#content) - [next](#dataloader)
def pre_processing(img):
img = img.astype(np.uint8)
# apply clahe with cv2
# clahe = cv2.createCLAHE(clipLimit=2, tileGridSize=(8, 8))
# img1 = clahe.apply(img)
# apply clahe with skimage
img2 = exposure.equalize_adapthist(img, clip_limit=0.02)
# apply histogram equalization
# img3 = cv2.equalizeHist(img)
return img2
transforms = torchvision.transforms.Compose(
[
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.5,), (0.5,)),
torchvision.transforms.Resize((512, 512)),
]
)
# path_img = df.Path.values[0]
# img = cv2.imread(path_img, 0)
# process = pre_processing(img)
# plt.imshow(process, cmap='gray')
# plt.axis('off')
class Dataset:
def __init__(self, df, transform=None):
self.df = df
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
row = self.df.iloc[idx]
img = cv2.imread(row["Path_save"], 0)
img = pre_processing(img)
label = row["Cancer"]
if self.transform:
img = self.transform(img)
label = torch.tensor(label)
return img, label
train_set = Dataset(df_train, transform=transforms)
valid_set = Dataset(df_valid, transform=transforms)
test_set = Dataset(df_test, transform=transforms)
def show_img(img, label):
if isinstance(img, torch.Tensor):
img = img.permute(1, 2, 0)
img = img.numpy()
if isinstance(label, torch.Tensor):
label = label.numpy()
plt.imshow(img, cmap="gray")
plt.title(f"Label: {label}")
plt.show()
show_img(*train_set[0])
#
# # Dataloader
# [prev](#dataset) - [content](#content) - [next](#model)
batch_size = 16
train_loader = DataLoader(
train_set, batch_size=batch_size, shuffle=True, num_workers=cores - 2
)
valid_loader = DataLoader(
valid_set, batch_size=batch_size, shuffle=False, num_workers=cores - 2
)
test_loader = DataLoader(
test_set, batch_size=batch_size, shuffle=False, num_workers=cores - 2
)
# Display image and label.
train_features, train_labels = next(iter(train_loader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.title(f"Label: {label}")
plt.show()
# plt.figure(figsize=(15, 15))
# for i, (img, label) in enumerate(train_set):
# plt.subplot(1,8,i+1)
# plt.imshow(img.squeeze(), cmap='gray')
# plt.axis('off')
# plt.subplots_adjust(wspace=None, hspace=None)
# plt.title(label)
# if i == 7:
# break
#
# # Model
# [prev](#dataloader) - [content](#content) - [next](#)
class Bottleneck(nn.Module):
expansion = (
3 # Number of output channels of the block relative to the input channels
)
def __init__(self, in_channels, out_channels, stride=1, downsample=None, width=32):
super().__init__()
self.conv1 = nn.Conv2d(in_channels, width, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(width)
self.conv2 = nn.Conv2d(
width, width, kernel_size=3, stride=stride, padding=1, bias=False
)
self.bn2 = nn.BatchNorm2d(width)
self.conv3 = nn.Conv2d(
width, out_channels * self.expansion, kernel_size=1, bias=False
)
self.bn3 = nn.BatchNorm2d(out_channels * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class HRNet(nn.Module):
def __init__(self, block, layers, num_classes=1, width=32):
super().__init__()
self.in_channels = 64
self.conv1 = nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0], stride=1, width=width)
self.layer2 = self._make_layer(block, 128, layers[1], stride=2, width=width)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2, width=width)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2, width=width)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(512 * block.expansion, num_classes)
def _make_layer(self, block, out_channels, blocks, stride=1, width=32):
downsample = None
if stride != 1 or self.in_channels != out_channels * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(
self.in_channels,
out_channels * block.expansion,
kernel_size=1,
stride=stride,
bias=False,
),
nn.BatchNorm2d(out_channels * block.expansion),
)
layers = []
layers.append(
block(self.in_channels, out_channels, stride, downsample, width=width)
)
self.in_channels = out_channels * block.expansion
for _ in range(1, blocks):
layers.append(block(self.in_channels, out_channels, width=width))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
x = torch.sigmoid(x)
return x
model = HRNet(Bottleneck, [3, 4, 23, 3]).to(device)
weight = torch.tensor([0.47, 0.53])  # class weights (currently unused)
# the network ends in a single sigmoid unit, so binary cross-entropy is the matching loss
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.005)
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(train_loader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        inputs = inputs.float().to(device)
        # BCELoss expects float targets shaped like the (batch, 1) sigmoid output
        labels = labels.float().unsqueeze(1).to(device)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print(f"[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}")
            running_loss = 0.0
print("Finished Training")
for epoch in range(40):
    running_loss = 0.0
    running_corrects = 0
    total_samples = 0
    all_preds = []
    all_labels = []
    for i, (inputs, labels) in enumerate(train_loader, 0):
        inputs = inputs.float().to(device)
        # BCELoss needs float targets with the same (batch, 1) shape as the sigmoid output
        labels = labels.float().unsqueeze(1).to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        # forward() already applies sigmoid, so the outputs are probabilities
        preds = outputs
        loss = criterion(outputs, labels)  # binary cross-entropy
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        running_corrects += torch.sum((preds > 0.5).float() == labels)
        total_samples += len(labels)
        # apply a 0.5 threshold to turn the probabilities into binary label predictions
        all_preds.extend((preds > 0.5).float().view(-1).tolist())
        all_labels.extend(labels.view(-1).tolist())
        if i % 10 == 5:
            train_acc = running_corrects / total_samples
            train_loss = running_loss / 10
            print(
                f"[Epoch {epoch + 1}, Batch {i}] loss: {train_loss:.3f}, acc: {train_acc:.3f}"
            )
            running_loss = 0.0
            running_corrects = 0
            total_samples = 0
    # Calculate F1-score, recall, and precision
    report = classification_report(all_labels, all_preds, output_dict=True)
    f1_score = report["weighted avg"]["f1-score"]
    recall = report["weighted avg"]["recall"]
    precision = report["weighted avg"]["precision"]
    # Reset running_corrects after calculating accuracy
    running_corrects = 0
# Step 5: Save your trained model
os.makedirs("/kaggle/working/model", exist_ok=True)  # make sure the output folder exists
torch.save(model.state_dict(), "/kaggle/working/model/HRnet_DDSM_Inbreast.pt")
| false | 0 | 3,878 | 0 | 3,908 | 3,878 |
||
129097310
|
<jupyter_start><jupyter_text>Mall_Customers
Kaggle dataset identifier: mall-customers
<jupyter_code>import pandas as pd
df = pd.read_csv('mall-customers/Mall_Customers.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CustomerID 200 non-null int64
1 Genre 200 non-null object
2 Age 200 non-null int64
3 Annual Income (k$) 200 non-null int64
4 Spending Score (1-100) 200 non-null int64
dtypes: int64(4), object(1)
memory usage: 7.9+ KB
<jupyter_text>Examples:
{
"CustomerID": 1,
"Genre": "Male",
"Age": 19,
"Annual Income (k$)": 15,
"Spending Score (1-100)": 39
}
{
"CustomerID": 2,
"Genre": "Male",
"Age": 21,
"Annual Income (k$)": 15,
"Spending Score (1-100)": 81
}
{
"CustomerID": 3,
"Genre": "Female",
"Age": 20,
"Annual Income (k$)": 16,
"Spending Score (1-100)": 6
}
{
"CustomerID": 4,
"Genre": "Female",
"Age": 23,
"Annual Income (k$)": 16,
"Spending Score (1-100)": 77
}
<jupyter_script>from pycaret.utils import enable_colab
enable_colab()
import pandas as pd
dataset = pd.read_csv("/kaggle/input/mall-customers/Mall_Customers.csv")
dataset.head()
dataset.shape
data = dataset.sample(frac=0.95, random_state=786)
data_unseen = dataset.drop(data.index)
data.reset_index(drop=True, inplace=True)
data_unseen.reset_index(drop=True, inplace=True)
print("Data for Modeling: " + str(data.shape))
print("Unseen Data For Predictions: " + str(data_unseen.shape))
from pycaret.clustering import *
exp_clu101 = setup(data, normalize=True, ignore_features=["CUST_ID"], session_id=123)
kmeans = create_model("kmeans")
print(kmeans)
models()
Agglo = create_model("hclust", num_clusters=4)
print(Agglo)
kmean_results = assign_model(kmeans)
kmean_results.head()
plot_model(kmeans)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/097/129097310.ipynb
|
mall-customers
|
shwetabh123
|
[{"Id": 129097310, "ScriptId": 38377779, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8292053, "CreationDate": "05/11/2023 02:00:39", "VersionNumber": 1.0, "Title": "notebook1abaa597c8", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 42.0, "LinesInsertedFromPrevious": 42.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184851493, "KernelVersionId": 129097310, "SourceDatasetVersionId": 10938}]
|
[{"Id": 10938, "DatasetId": 7721, "DatasourceVersionId": 10938, "CreatorUserId": 1508014, "LicenseName": "CC0: Public Domain", "CreationDate": "12/23/2017 06:12:40", "VersionNumber": 1.0, "Title": "Mall_Customers", "Slug": "mall-customers", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 4286.0, "TotalUncompressedBytes": 4286.0}]
|
[{"Id": 7721, "CreatorUserId": 1508014, "OwnerUserId": 1508014.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 10938.0, "CurrentDatasourceVersionId": 10938.0, "ForumId": 14591, "Type": 2, "CreationDate": "12/23/2017 06:12:40", "LastActivityDate": "02/01/2018", "TotalViews": 246607, "TotalDownloads": 43908, "TotalVotes": 192, "TotalKernels": 140}]
|
[{"Id": 1508014, "UserName": "shwetabh123", "DisplayName": "shwetabh123", "RegisterDate": "12/20/2017", "PerformanceTier": 1}]
|
from pycaret.utils import enable_colab
enable_colab()
import pandas as pd
dataset = pd.read_csv("/kaggle/input/mall-customers/Mall_Customers.csv")
dataset.head()
dataset.shape
data = dataset.sample(frac=0.95, random_state=786)
data_unseen = dataset.drop(data.index)
data.reset_index(drop=True, inplace=True)
data_unseen.reset_index(drop=True, inplace=True)
print("Data for Modeling: " + str(data.shape))
print("Unseen Data For Predictions: " + str(data_unseen.shape))
from pycaret.clustering import *
exp_clu101 = setup(data, normalize=True, ignore_features=["CUST_ID"], session_id=123)
kmeans = create_model("kmeans")
print(kmeans)
models()
Agglo = create_model("hclust", num_clusters=4)
print(Agglo)
kmean_results = assign_model(kmeans)
kmean_results.head()
plot_model(kmeans)
|
[{"mall-customers/Mall_Customers.csv": {"column_names": "[\"CustomerID\", \"Genre\", \"Age\", \"Annual Income (k$)\", \"Spending Score (1-100)\"]", "column_data_types": "{\"CustomerID\": \"int64\", \"Genre\": \"object\", \"Age\": \"int64\", \"Annual Income (k$)\": \"int64\", \"Spending Score (1-100)\": \"int64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 200 entries, 0 to 199\nData columns (total 5 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 CustomerID 200 non-null int64 \n 1 Genre 200 non-null object\n 2 Age 200 non-null int64 \n 3 Annual Income (k$) 200 non-null int64 \n 4 Spending Score (1-100) 200 non-null int64 \ndtypes: int64(4), object(1)\nmemory usage: 7.9+ KB\n", "summary": "{\"CustomerID\": {\"count\": 200.0, \"mean\": 100.5, \"std\": 57.879184513951124, \"min\": 1.0, \"25%\": 50.75, \"50%\": 100.5, \"75%\": 150.25, \"max\": 200.0}, \"Age\": {\"count\": 200.0, \"mean\": 38.85, \"std\": 13.96900733155888, \"min\": 18.0, \"25%\": 28.75, \"50%\": 36.0, \"75%\": 49.0, \"max\": 70.0}, \"Annual Income (k$)\": {\"count\": 200.0, \"mean\": 60.56, \"std\": 26.264721165271244, \"min\": 15.0, \"25%\": 41.5, \"50%\": 61.5, \"75%\": 78.0, \"max\": 137.0}, \"Spending Score (1-100)\": {\"count\": 200.0, \"mean\": 50.2, \"std\": 25.823521668370173, \"min\": 1.0, \"25%\": 34.75, \"50%\": 50.0, \"75%\": 73.0, \"max\": 99.0}}", "examples": "{\"CustomerID\":{\"0\":1,\"1\":2,\"2\":3,\"3\":4},\"Genre\":{\"0\":\"Male\",\"1\":\"Male\",\"2\":\"Female\",\"3\":\"Female\"},\"Age\":{\"0\":19,\"1\":21,\"2\":20,\"3\":23},\"Annual Income (k$)\":{\"0\":15,\"1\":15,\"2\":16,\"3\":16},\"Spending Score (1-100)\":{\"0\":39,\"1\":81,\"2\":6,\"3\":77}}"}}]
| true | 1 |
<start_data_description><data_path>mall-customers/Mall_Customers.csv:
<column_names>
['CustomerID', 'Genre', 'Age', 'Annual Income (k$)', 'Spending Score (1-100)']
<column_types>
{'CustomerID': 'int64', 'Genre': 'object', 'Age': 'int64', 'Annual Income (k$)': 'int64', 'Spending Score (1-100)': 'int64'}
<dataframe_Summary>
{'CustomerID': {'count': 200.0, 'mean': 100.5, 'std': 57.879184513951124, 'min': 1.0, '25%': 50.75, '50%': 100.5, '75%': 150.25, 'max': 200.0}, 'Age': {'count': 200.0, 'mean': 38.85, 'std': 13.96900733155888, 'min': 18.0, '25%': 28.75, '50%': 36.0, '75%': 49.0, 'max': 70.0}, 'Annual Income (k$)': {'count': 200.0, 'mean': 60.56, 'std': 26.264721165271244, 'min': 15.0, '25%': 41.5, '50%': 61.5, '75%': 78.0, 'max': 137.0}, 'Spending Score (1-100)': {'count': 200.0, 'mean': 50.2, 'std': 25.823521668370173, 'min': 1.0, '25%': 34.75, '50%': 50.0, '75%': 73.0, 'max': 99.0}}
<dataframe_info>
RangeIndex: 200 entries, 0 to 199
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CustomerID 200 non-null int64
1 Genre 200 non-null object
2 Age 200 non-null int64
3 Annual Income (k$) 200 non-null int64
4 Spending Score (1-100) 200 non-null int64
dtypes: int64(4), object(1)
memory usage: 7.9+ KB
<some_examples>
{'CustomerID': {'0': 1, '1': 2, '2': 3, '3': 4}, 'Genre': {'0': 'Male', '1': 'Male', '2': 'Female', '3': 'Female'}, 'Age': {'0': 19, '1': 21, '2': 20, '3': 23}, 'Annual Income (k$)': {'0': 15, '1': 15, '2': 16, '3': 16}, 'Spending Score (1-100)': {'0': 39, '1': 81, '2': 6, '3': 77}}
<end_description>
| 279 | 0 | 734 | 279 |
129097246
|
# # Convolutional Neural Networks (CNN)
# Content:
#
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sn
sn.set(font_scale=1.4)
import matplotlib.pyplot as plt
from tqdm import tqdm
import cv2
from sklearn.utils import shuffle
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
from keras.utils import to_categorical
from keras.optimizers import RMSprop
from keras.callbacks import ReduceLROnPlateau
from sklearn.metrics import confusion_matrix
import os
print(os.listdir("/kaggle/input/motion-planning-datasets-master/mp"))
# Any results you write to the current directory are saved as output.
class_names = [
"alternating_gaps",
"bugtrap_forest",
"forest",
"gaps_and_forest",
"mazes",
"multiple_bugtraps",
"shifting_gaps",
"single_bugtrap",
]
class_names_label = {class_name: i for i, class_name in enumerate(class_names)}
num_classes = len(class_names)
IMAGE_SIZE = (201, 201)  # must match the (201, 201, 3) input shape used by the model below
# ## Loading the Data
# * In this part I am loading the data
#
def load_data():
datasets = [
"/kaggle/input/motion-planning-datasets-master/mp/train",
"/kaggle/input/motion-planning-datasets-master/mp/test",
]
output = []
# Iterate through training and test sets
for dataset in datasets:
images = []
labels = []
print("Loading {}".format(dataset))
# Iterate through each folder corresponding to a category
for folder in os.listdir(dataset):
label = class_names_label[folder]
# Iterate through each image in the folder
for file in tqdm(os.listdir(os.path.join(dataset, folder))):
# Get the path name of the image
img_path = os.path.join(os.path.join(dataset, folder), file)
# Open and resize the img
                img = cv2.imread(img_path)
                img = cv2.resize(img, IMAGE_SIZE)
# Append the image and its corresponding label to the output
images.append(img)
labels.append(label)
images = np.array(images, dtype="float32")
labels = np.array(labels, dtype="int32")
output.append((images, labels))
return output
(train_images, train_labels), (test_images, test_labels) = load_data()
train_images, train_labels = shuffle(train_images, train_labels, random_state=25)
# ## Exploring the data
# - How many training and testing examples do we have?
# - What is the size of the image
# - What is the proportion of each class
#
n_train = train_labels.shape[0]
n_test = test_labels.shape[0]
print("Number of training examples: {}".format(n_train))
print("Number of testing examples: {}".format(n_test))
print("Each image is of size: {}".format(IMAGE_SIZE))
_, train_counts = np.unique(train_labels, return_counts=True)
_, test_counts = np.unique(test_labels, return_counts=True)
pd.DataFrame({"train": train_counts, "test": test_counts}, index=class_names).plot.bar()
plt.show()
# ## Normalizing the Data
# Scales pixel values from the 0-255 range down to the 0-1 range. This helps the model train faster and more stably.
train_images = train_images / 255.0
test_images = test_images / 255.0
# ## Visualizing the Data
# Displays images from the dataset.
def display_random_image(class_names, images, labels):
index = np.random.randint(images.shape[0])
plt.figure()
plt.imshow(images[index])
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.title("Image #{} : ".format(index) + class_names[labels[index]])
plt.show()
return index
display_random_image(class_names, train_images, train_labels)
# Displays first 25 images from dataset for a better view.
def display_examples(class_names, images, labels):
fig = plt.figure(figsize=(10, 10))
fig.suptitle("Some examples of images of the dataset", fontsize=16)
for i in range(25):
plt.subplot(5, 5, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[labels[i]])
plt.show()
display_examples(class_names, train_images, train_labels)
# ## CNN
# 1. Build the model
# 
# - Feature detector is a 5x5 matrix
# 
# - relu: sets negative values to 0, introducing non-linearity into the network
# 
# - MaxPooling: downsamples each feature map with a 2x2 window, which also helps reduce overfitting
# 
# - Dropout: randomly drops a fraction of a layer's units during training to reduce overfitting
# 
# - Flatten: collapses the 2D feature maps into a 1D vector
# 
# - Full Connection: every neuron in the layer is connected to every neuron in the previous layer
# 
#
model = Sequential()
model.add(Conv2D(32, (5, 5), activation="relu", input_shape=(IMAGE_SIZE[0], IMAGE_SIZE[1], 3)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
# create NN
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dense(num_classes, activation="softmax"))
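# Added check (not in the original): print the layer output shapes to sanity-check
# how the conv and pooling layers shrink the input before the dense layers.
model.summary()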
# 2. Compile the model
# We can use the parameters:
# - optimizer: adam = RMSProp + Momentum
# - Momentum = takes past gradients into account to make a better update
# - RMSProp = exponentially weighted average of the squares of past gradients
# - Loss Function: I use categorical cross-entropy for classification, since each image belongs to exactly one class
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
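# A hedged sketch (an assumption, not the original configuration): an explicit Adam
# optimizer plus the already-imported ReduceLROnPlateau callback, which lowers the
# learning rate when validation accuracy plateaus; it could be passed to model.fit
# via callbacks=[lr_reduction].
lr_reduction = ReduceLROnPlateau(
    monitor="val_accuracy", patience=2, factor=0.5, min_lr=1e-5, verbose=1
)
# model.compile(
#     optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
#     loss="categorical_crossentropy",
#     metrics=["accuracy"],
# )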
# 3. Train/fit the data to the model
history = model.fit(
train_images,
train_labels,
batch_size=200,
epochs=10,
validation_data=(test_images, test_labels),
verbose=1,
)
def plot_accuracy(history):
fig = plt.figure(figsize=(10, 5))
# plot accuracy
plt.plot(history.history["accuracy"], "bo--", label="accuracy")
plt.plot(history.history["val_accuracy"], "ro--", label="val_accuracy")
plt.title("train_acc vs val_acc")
plt.ylabel("accuracy")
plt.xlabel("epochs")
plt.legend()
plt.show()
def plot_loss(history):
fig = plt.figure(figsize=(10, 5))
# Plot loss function
plt.plot(history.history["loss"], "bo--", label="loss")
plt.plot(history.history["val_loss"], "ro--", label="val_loss")
plt.title("train_loss vs val_loss")
plt.ylabel("loss")
plt.xlabel("epochs")
plt.legend()
plt.show()
plot_accuracy(history)
plot_loss(history)
# 4. Evaluate model on the test set
test_loss = model.evaluate(test_images, test_labels)
print("The error is: %.2f%%" % (100 - test_loss[1] * 100))
# Testing on a Random Image
predictions = model.predict(test_images)
pred_labels = np.argmax(predictions, axis=1)
display_random_image(class_names, test_images, pred_labels)
# 5. Error Analysis
# Analyze to see what images the classifier has trouble with
def print_mislabeled_images(class_names, test_images, test_labels, pred_labels):
bool_arr = []
i = 0
for pred in pred_labels:
arr = test_labels[i]
if arr[pred] == 0:
bool_arr.append(False)
else:
bool_arr.append(True)
i = i + 1
mislabeled_indices = []
i = 0
for b in bool_arr:
if b == False:
mislabeled_indices.append(i)
i = i + 1
mislabeled_images = test_images[mislabeled_indices]
mislabeled_labels = pred_labels[mislabeled_indices]
title = "Some examples of mislabeled images by the classifier:"
display_examples(class_names, mislabeled_images, mislabeled_labels)
print_mislabeled_images(class_names, test_images, test_labels, pred_labels)
# Recover integer class labels from the one-hot encoded test labels
tlabels = np.argmax(test_labels, axis=1)
con_mat = confusion_matrix(tlabels, pred_labels)
ax = plt.axes()
sn.heatmap(
con_mat,
annot=True,
annot_kws={"size": 10},
xticklabels=class_names,
yticklabels=class_names,
ax=ax,
)
ax.set_title("Confusion matrix")
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/097/129097246.ipynb
| null | null |
[{"Id": 129097246, "ScriptId": 38375724, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13632161, "CreationDate": "05/11/2023 01:59:32", "VersionNumber": 2.0, "Title": "CNN for map planning data set", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 303.0, "LinesInsertedFromPrevious": 3.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 300.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# # Convolutional Neural Networks (CNN)
# Content:
#
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sn
sn.set(font_scale=1.4)
import matplotlib.pyplot as plt
from tqdm import tqdm
import cv2
from sklearn.utils import shuffle
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
from keras.utils import to_categorical
from keras.optimizers import RMSprop
from keras.callbacks import ReduceLROnPlateau
from sklearn.metrics import confusion_matrix
import os
print(os.listdir("/kaggle/input/motion-planning-datasets-master/mp"))
# Any results you write to the current directory are saved as output.
class_names = [
"alternating_gaps",
"bugtrap_forest",
"forest",
"gaps_and_forest",
"mazes",
"multiple_bugtraps",
"shifting_gaps",
"single_bugtrap",
]
class_names_label = {class_name: i for i, class_name in enumerate(class_names)}
num_classes = len(class_names)
IMAGE_SIZE = (201, 201)  # must match the (201, 201, 3) input shape used by the model below
# ## Loading the Data
# * In this part I am loading the data
#
def load_data():
datasets = [
"/kaggle/input/motion-planning-datasets-master/mp/train",
"/kaggle/input/motion-planning-datasets-master/mp/test",
]
output = []
# Iterate through training and test sets
for dataset in datasets:
images = []
labels = []
print("Loading {}".format(dataset))
# Iterate through each folder corresponding to a category
for folder in os.listdir(dataset):
label = class_names_label[folder]
# Iterate through each image in the folder
for file in tqdm(os.listdir(os.path.join(dataset, folder))):
# Get the path name of the image
img_path = os.path.join(os.path.join(dataset, folder), file)
# Open and resize the img
                img = cv2.imread(img_path)
                img = cv2.resize(img, IMAGE_SIZE)
# Append the image and its corresponding label to the output
images.append(img)
labels.append(label)
images = np.array(images, dtype="float32")
labels = np.array(labels, dtype="int32")
output.append((images, labels))
return output
(train_images, train_labels), (test_images, test_labels) = load_data()
train_images, train_labels = shuffle(train_images, train_labels, random_state=25)
# ## Exploring the data
# - How many training and testing examples do we have?
# - What is the size of the image
# - What is the proportion of each class
#
n_train = train_labels.shape[0]
n_test = test_labels.shape[0]
print("Number of training examples: {}".format(n_train))
print("Number of testing examples: {}".format(n_test))
print("Each image is of size: {}".format(IMAGE_SIZE))
_, train_counts = np.unique(train_labels, return_counts=True)
_, test_counts = np.unique(test_labels, return_counts=True)
pd.DataFrame({"train": train_counts, "test": test_counts}, index=class_names).plot.bar()
plt.show()
# ## Normalizing the Data
# Scales pixel values from the 0-255 range down to the 0-1 range. This helps the model train faster and more stably.
train_images = train_images / 255.0
test_images = test_images / 255.0
# ## Visualizing the Data
# Displays images from the dataset.
def display_random_image(class_names, images, labels):
index = np.random.randint(images.shape[0])
plt.figure()
plt.imshow(images[index])
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.title("Image #{} : ".format(index) + class_names[labels[index]])
plt.show()
return index
display_random_image(class_names, train_images, train_labels)
# Displays first 25 images from dataset for a better view.
def display_examples(class_names, images, labels):
fig = plt.figure(figsize=(10, 10))
fig.suptitle("Some examples of images of the dataset", fontsize=16)
for i in range(25):
plt.subplot(5, 5, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[labels[i]])
plt.show()
display_examples(class_names, train_images, train_labels)
# ## CNN
# 1. Build the model
# 
# - Feature detector is a 5x5 matrix
# 
# - relu: sets negative values to 0, introducing non-linearity into the network
# 
# - MaxPooling: downsamples each feature map with a 2x2 window, which also helps reduce overfitting
# 
# - Dropout: randomly drops a fraction of a layer's units during training to reduce overfitting
# 
# - Flatten: collapses the 2D feature maps into a 1D vector
# 
# - Full Connection: every neuron in the layer is connected to every neuron in the previous layer
# 
#
model = Sequential()
model.add(Conv2D(32, (5, 5), activation="relu", input_shape=(IMAGE_SIZE[0], IMAGE_SIZE[1], 3)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
# create NN
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dense(num_classes, activation="softmax"))
# 2. Compile the model
# We can use the parameters:
# - optimizer: adam = RMSProp + Momentum
# - Momentum = takes past gradients into account to make a better update
# - RMSProp = exponentially weighted average of the squares of past gradients
# - Loss Function: I use categorical cross-entropy for classification, since each image belongs to exactly one class
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# 3. Train/fit the data to the model
history = model.fit(
train_images,
train_labels,
batch_size=200,
epochs=10,
validation_data=(test_images, test_labels),
verbose=1,
)
def plot_accuracy(history):
fig = plt.figure(figsize=(10, 5))
# plot accuracy
plt.plot(history.history["accuracy"], "bo--", label="accuracy")
plt.plot(history.history["val_accuracy"], "ro--", label="val_accuracy")
plt.title("train_acc vs val_acc")
plt.ylabel("accuracy")
plt.xlabel("epochs")
plt.legend()
plt.show()
def plot_loss(history):
fig = plt.figure(figsize=(10, 5))
# Plot loss function
plt.plot(history.history["loss"], "bo--", label="loss")
plt.plot(history.history["val_loss"], "ro--", label="val_loss")
plt.title("train_loss vs val_loss")
plt.ylabel("loss")
plt.xlabel("epochs")
plt.legend()
plt.show()
plot_accuracy(history)
plot_loss(history)
# 4. Evaluate model on the test set
test_loss = model.evaluate(test_images, test_labels)
print("The error is: %.2f%%" % (100 - test_loss[1] * 100))
# Testing on a Random Image
predictions = model.predict(test_images)
pred_labels = np.argmax(predictions, axis=1)
display_random_image(class_names, test_images, pred_labels)
# 5. Error Analysis
# Analyze to see what images the classifier has trouble with
def print_mislabeled_images(class_names, test_images, test_labels, pred_labels):
bool_arr = []
i = 0
for pred in pred_labels:
arr = test_labels[i]
if arr[pred] == 0:
bool_arr.append(False)
else:
bool_arr.append(True)
i = i + 1
mislabeled_indices = []
i = 0
for b in bool_arr:
if b == False:
mislabeled_indices.append(i)
i = i + 1
mislabeled_images = test_images[mislabeled_indices]
mislabeled_labels = pred_labels[mislabeled_indices]
title = "Some examples of mislabeled images by the classifier:"
display_examples(class_names, mislabeled_images, mislabeled_labels)
print_mislabeled_images(class_names, test_images, test_labels, pred_labels)
# Recover integer class labels from the one-hot encoded test labels
tlabels = np.argmax(test_labels, axis=1)
con_mat = confusion_matrix(tlabels, pred_labels)
ax = plt.axes()
sn.heatmap(
con_mat,
annot=True,
annot_kws={"size": 10},
xticklabels=class_names,
yticklabels=class_names,
ax=ax,
)
ax.set_title("Confusion matrix")
plt.show()
| false | 0 | 2,469 | 0 | 2,469 | 2,469 |
||
129097235
|
import random
import pandas as pd
def encrypt(plaintext, key):
ciphertext = ""
for char in plaintext:
if char.isalpha():
char = char.lower()
index = ord(char) - ord("a")
char = key[index]
ciphertext += char
return ciphertext
def generate_key():
alphabet = list("abcdefghijklmnopqrstuvwxyz")
random.shuffle(alphabet)
return "".join(alphabet)
# Set the plaintext message
with open("/kaggle/input/text1234/text1.txt", "r") as f:
    plaintext = f.read()
# Remove punctuation, special characters, and spaces, and convert to lowercase
plaintext = "".join(filter(str.isalpha, plaintext)).lower()
# Generate a random key
key = generate_key()
# Encrypt the plaintext using the key
ciphertext = encrypt(plaintext, key)
print("Plaintext: ", plaintext)
print("Key: ", key)
print("Ciphertext: ", ciphertext)
import numpy as np
def generate_digraph_frequency_matrix(text):
matrix_size = 26
digraph_counts = np.zeros((matrix_size, matrix_size), dtype=int)
# Count digraph occurrences in the text
for i in range(len(text) - 1):
current_char = text[i]
next_char = text[i + 1]
if current_char.isalpha() and next_char.isalpha():
current_index = ord(current_char.lower()) - ord("a")
next_index = ord(next_char.lower()) - ord("a")
digraph_counts[current_index][next_index] += 1
# Add five to each element in the matrix
digraph_counts += 5
# Normalize the matrix by dividing each element by its row sum
row_sums = digraph_counts.sum(axis=1)
matrix = digraph_counts / row_sums[:, np.newaxis]
return matrix
# Set the English text to generate the digraph frequency matrix
text = "Your English text goes here..."
# Remove punctuation, special characters, and spaces, and convert to lowercase
text = "".join(filter(str.isalpha, text)).lower()
# Generate the digraph frequency matrix
digraph_matrix = generate_digraph_frequency_matrix(text)
# Print the digraph frequency matrix
print(digraph_matrix)
import numpy as np
def initialize_hmm(N, M):
# Initialize transition matrix A
A = digraph_matrix.copy()
# Initialize emission matrix B
B = np.random.rand(N, M)
B /= B.sum(axis=1, keepdims=True)
# Initialize initial state distribution pi
pi = np.random.rand(N)
pi /= pi.sum()
# Print the trained HMM parameters
return A, B, pi
def forward_backward(ciphertext, A, B, pi):
T = len(ciphertext)
N, M = B.shape
# Initialize forward and backward variables
forward = np.zeros((T, N))
backward = np.zeros((T, N))
# Compute forward variables
forward[0] = pi * B[:, ord(ciphertext[0].lower()) - ord("a")]
for t in range(1, T):
forward[t] = (
np.dot(forward[t - 1], A) * B[:, ord(ciphertext[t].lower()) - ord("a")]
)
# Compute backward variables
backward[-1] = 1
for t in range(T - 2, -1, -1):
backward[t] = np.dot(
A, B[:, ord(ciphertext[t + 1].lower()) - ord("a")] * backward[t + 1]
)
# Compute the scaled forward and backward variables
scale = np.sum(forward[-1])
forward /= scale
backward /= scale
return forward, backward
def baum_welch(ciphertext, A, B, pi, num_iterations):
T = len(ciphertext)
N, M = B.shape
for iteration in range(num_iterations):
# Expectation step
forward, backward = forward_backward(ciphertext, A, B, pi)
xi = np.zeros((T - 1, N, N))
for t in range(T - 1):
numerator = (
forward[t][:, np.newaxis]
* A
                * B[:, ord(ciphertext[t + 1].lower()) - ord("a")].reshape((1, -1))
* backward[t + 1]
)
denominator = np.sum(np.sum(numerator, axis=0), axis=0)
xi[t] = numerator / denominator
        gamma = forward * backward
        # Normalize gamma so each row is a probability distribution over states
        gamma = gamma / gamma.sum(axis=1, keepdims=True)
        # Maximization step
        A = np.sum(xi, axis=0) / np.sum(gamma[:-1], axis=0)[:, np.newaxis]
        pi = gamma[0]
        # Re-estimate B: for each symbol j, sum gamma over the time steps where j is observed
        obs = np.array([ord(c.lower()) - ord("a") for c in ciphertext])
        for j in range(M):
            mask = obs == j
            if mask.any():
                B[:, j] = gamma[mask].sum(axis=0) / gamma.sum(axis=0)
        B = B / B.sum(axis=1, keepdims=True)
return A, B, pi
N, M = 26, 26  # 26 hidden states and 26 observation symbols (letters a-z)
A, B, pi = initialize_hmm(N, M)
print("Transition matrix A:")
print(A)
print("\nEmission matrix B:")
print(B)
print("\nInitial state distribution pi:")
print(pi)
# Set the actual (decryption) key implied by the generated substitution key:
# the putative key read off B maps each ciphertext letter back to a plaintext letter
actual_key = [""] * 26
for i, c in enumerate(key):
    actual_key[ord(c) - ord("a")] = chr(ord("a") + i)
actual_key = "".join(actual_key)
# Determine putative key from B matrix
putative_key = []
for j in range(len(actual_key)):
max_prob_index = np.argmax(B[:, j])
putative_key.append(chr(ord("a") + max_prob_index))
# Calculate fraction of matching key elements
num_matching = sum(
1 for i in range(len(actual_key)) if actual_key[i] == putative_key[i]
)
fraction_matching = num_matching / len(actual_key)
# Print the results
print("Actual Key:", actual_key)
print("Putative Key:", "".join(putative_key))
print("Fraction of Matching Key Elements: {:.4f}".format(fraction_matching))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/097/129097235.ipynb
| null | null |
[{"Id": 129097235, "ScriptId": 38374490, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 7936144, "CreationDate": "05/11/2023 01:59:23", "VersionNumber": 1.0, "Title": "notebook1bbb3e94e3", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 170.0, "LinesInsertedFromPrevious": 170.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import random
import pandas as pd
def encrypt(plaintext, key):
ciphertext = ""
for char in plaintext:
if char.isalpha():
char = char.lower()
index = ord(char) - ord("a")
char = key[index]
ciphertext += char
return ciphertext
def generate_key():
alphabet = list("abcdefghijklmnopqrstuvwxyz")
random.shuffle(alphabet)
return "".join(alphabet)
# Set the plaintext message
with open("/kaggle/input/text1234/text1.txt", "r") as f:
    plaintext = f.read()
# Remove punctuation, special characters, and spaces, and convert to lowercase
plaintext = "".join(filter(str.isalpha, plaintext)).lower()
# Generate a random key
key = generate_key()
# Encrypt the plaintext using the key
ciphertext = encrypt(plaintext, key)
print("Plaintext: ", plaintext)
print("Key: ", key)
print("Ciphertext: ", ciphertext)
import numpy as np
def generate_digraph_frequency_matrix(text):
matrix_size = 26
digraph_counts = np.zeros((matrix_size, matrix_size), dtype=int)
# Count digraph occurrences in the text
for i in range(len(text) - 1):
current_char = text[i]
next_char = text[i + 1]
if current_char.isalpha() and next_char.isalpha():
current_index = ord(current_char.lower()) - ord("a")
next_index = ord(next_char.lower()) - ord("a")
digraph_counts[current_index][next_index] += 1
# Add five to each element in the matrix
digraph_counts += 5
# Normalize the matrix by dividing each element by its row sum
row_sums = digraph_counts.sum(axis=1)
matrix = digraph_counts / row_sums[:, np.newaxis]
return matrix
# Set the English text to generate the digraph frequency matrix
text = "Your English text goes here..."
# Remove punctuation, special characters, and spaces, and convert to lowercase
text = "".join(filter(str.isalpha, text)).lower()
# Generate the digraph frequency matrix
digraph_matrix = generate_digraph_frequency_matrix(text)
# Print the digraph frequency matrix
print(digraph_matrix)
import numpy as np
def initialize_hmm(N, M):
# Initialize transition matrix A
A = digraph_matrix.copy()
# Initialize emission matrix B
B = np.random.rand(N, M)
B /= B.sum(axis=1, keepdims=True)
# Initialize initial state distribution pi
pi = np.random.rand(N)
pi /= pi.sum()
# Print the trained HMM parameters
return A, B, pi
def forward_backward(ciphertext, A, B, pi):
T = len(ciphertext)
N, M = B.shape
# Initialize forward and backward variables
forward = np.zeros((T, N))
backward = np.zeros((T, N))
# Compute forward variables
forward[0] = pi * B[:, ord(ciphertext[0].lower()) - ord("a")]
for t in range(1, T):
forward[t] = (
np.dot(forward[t - 1], A) * B[:, ord(ciphertext[t].lower()) - ord("a")]
)
# Compute backward variables
backward[-1] = 1
for t in range(T - 2, -1, -1):
backward[t] = np.dot(
A, B[:, ord(ciphertext[t + 1].lower()) - ord("a")] * backward[t + 1]
)
# Compute the scaled forward and backward variables
scale = np.sum(forward[-1])
forward /= scale
backward /= scale
return forward, backward
def baum_welch(ciphertext, A, B, pi, num_iterations):
T = len(ciphertext)
N, M = B.shape
for iteration in range(num_iterations):
# Expectation step
forward, backward = forward_backward(ciphertext, A, B, pi)
xi = np.zeros((T - 1, N, N))
for t in range(T - 1):
numerator = (
forward[t][:, np.newaxis]
* A
                * B[:, ord(ciphertext[t + 1].lower()) - ord("a")].reshape((1, -1))
* backward[t + 1]
)
denominator = np.sum(np.sum(numerator, axis=0), axis=0)
xi[t] = numerator / denominator
        gamma = forward * backward
        # Normalize gamma so each row is a probability distribution over states
        gamma = gamma / gamma.sum(axis=1, keepdims=True)
        # Maximization step
        A = np.sum(xi, axis=0) / np.sum(gamma[:-1], axis=0)[:, np.newaxis]
        pi = gamma[0]
        # Re-estimate B: for each symbol j, sum gamma over the time steps where j is observed
        obs = np.array([ord(c.lower()) - ord("a") for c in ciphertext])
        for j in range(M):
            mask = obs == j
            if mask.any():
                B[:, j] = gamma[mask].sum(axis=0) / gamma.sum(axis=0)
        B = B / B.sum(axis=1, keepdims=True)
return A, B, pi
N, M = 26, 26  # 26 hidden states and 26 observation symbols (letters a-z)
A, B, pi = initialize_hmm(N, M)
print("Transition matrix A:")
print(A)
print("\nEmission matrix B:")
print(B)
print("\nInitial state distribution pi:")
print(pi)
# Set the actual (decryption) key implied by the generated substitution key:
# the putative key read off B maps each ciphertext letter back to a plaintext letter
actual_key = [""] * 26
for i, c in enumerate(key):
    actual_key[ord(c) - ord("a")] = chr(ord("a") + i)
actual_key = "".join(actual_key)
# Determine putative key from B matrix
putative_key = []
for j in range(len(actual_key)):
max_prob_index = np.argmax(B[:, j])
putative_key.append(chr(ord("a") + max_prob_index))
# Calculate fraction of matching key elements
num_matching = sum(
1 for i in range(len(actual_key)) if actual_key[i] == putative_key[i]
)
fraction_matching = num_matching / len(actual_key)
# Print the results
print("Actual Key:", actual_key)
print("Putative Key:", "".join(putative_key))
print("Fraction of Matching Key Elements: {:.4f}".format(fraction_matching))
| false | 0 | 1,495 | 0 | 1,495 | 1,495 |
||
129097508
|
# ## Flight Delay Analysis
# ## Introduction
# ### The objective is to create a model that can classify whether a flight is likely to be delayed.
# ##### Notes: I created this analysis initially in Databricks.
# ##### Data sources:
# ##### 1. https://www.transtats.bts.gov/DL_SelectFields.aspx?gnoyr_VQ=FGK&QO_fu146_anzr=b0-gvzr
# ##### 2. https://ourairports.com/data/
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import DoubleType, IntegerType, StringType
spark = SparkSession.builder.appName("FlightDelayAnalysis").getOrCreate()
# ## Reading and Preprocessing Data
df_airports = spark.read.options(header="true").csv(
"/kaggle/input/flightdelay-data/airports.csv"
)
df_flights = spark.read.options(header="true").csv(
"/kaggle/input/flightdelay-data/flights.csv"
)
df_airlines = spark.read.options(header="true").csv(
"/kaggle/input/flightdelay-data/airlines.csv"
)
df_flights.printSchema()
df_flights = df_flights.join(
df_airlines, df_flights.OP_UNIQUE_CARRIER == df_airlines.Code
)
df_flights = df_flights.drop(df_flights.Code)
display(df_airports.where(col("local_code") == "LAX").select("*"))
df_airports1 = (
df_airports.drop("continent")
.drop("iso_country")
.drop("iso_region")
.drop("gps_code")
.drop("iata_code")
.drop("home_link")
.drop("wikipedia_link")
.drop("keywords")
.drop("scheduled_service")
.drop("ident")
.drop("id")
)
df_joined_dep = (
df_flights.join(df_airports1, df_flights.ORIGIN == df_airports1.local_code, "inner")
.withColumnRenamed("type", "dep_type")
.withColumnRenamed("latitude_deg", "dep_lat")
.withColumnRenamed("longitude_deg", "dep_lon")
.withColumnRenamed("elevation_ft", "dep_elevation_ft")
.withColumnRenamed("municipality", "dep_municipality")
.withColumnRenamed("local_code", "dep_local_code")
.withColumnRenamed("name", "dep_name")
)
df_all_joined_dep_arr = (
df_joined_dep.join(
df_airports1, df_joined_dep.DEST == df_airports1.local_code, "inner"
)
.withColumnRenamed("type", "arr_type")
.withColumnRenamed("latitude_deg", "arr_lat")
.withColumnRenamed("longitude_deg", "arr_lon")
.withColumnRenamed("elevation_ft", "arr_elevation_ft")
.withColumnRenamed("municipality", "arr_municipality")
.withColumnRenamed("local_code", "arr_local_code")
.withColumnRenamed("name", "arr_name")
)
df_all_joined_dep_arr = df_all_joined_dep_arr.dropna("any")
df_all_add_2_cols = (
    df_all_joined_dep_arr.withColumn(
"dep_delay_int", when(col("DEP_DELAY") <= 0, 0).when(col("DEP_DELAY") > 1, 1)
)
.withColumn(
"arr_delay_int", when(col("ARR_DELAY") <= 0, 0).when(col("ARR_DELAY") > 1, 1)
)
.dropna()
)
df_all_add_2_cols.printSchema()
df_all_1 = df_all_add_2_cols.drop("DEP_DELAY").drop("ARR_DELAY")
# The CSVs were read without inferSchema, so cast the numeric feature columns to double
# before they are assembled into feature vectors below
for c in [
    "DAY_OF_WEEK", "ORIGIN_AIRPORT_ID", "DEST_AIRPORT_ID", "DISTANCE",
    "dep_lat", "dep_lon", "dep_elevation_ft", "arr_lat", "arr_lon", "arr_elevation_ft",
]:
    df_all_1 = df_all_1.withColumn(c, col(c).cast(DoubleType()))
display(df_all_1)
# ## Machine Learning - StringIndexer
from pyspark.ml import Pipeline
from pyspark.ml.feature import (
VectorAssembler,
StringIndexer,
VectorIndexer,
MinMaxScaler,
OneHotEncoder,
MaxAbsScaler,
)
from pyspark.ml.classification import LogisticRegression, DecisionTreeClassifier
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
from pyspark.ml.evaluation import (
BinaryClassificationEvaluator,
MulticlassClassificationEvaluator,
)
from xgboost.spark import SparkXGBClassifier
# StringIndexer stages for the categorical columns; the output names below are
# reconstructed assumptions chosen to match the *_OHE_* columns used in feature_columns
categorical_cols = {
    "OP_UNIQUE_CARRIER": "_OHE_OP_UNIQUE_CARRIER",
    "DEST": "_OHE_OP_DEST",
    "ORIGIN": "_OHE_OP_ORIGIN",
    "ORIGIN_CITY_NAME": "_OHE_ORIGIN_CITY_NAME",
    "DEST_CITY_NAME": "_OHE_DEST_CITY_NAME",
    "DEST_STATE_ABR": "_OHE_DEST_STATE_ABR",
    "Description": "_OHE_description",
    "dep_type": "_OHE_dep_type",
    "dep_name": "_OHE_dep_name",
    "arr_type": "_OHE_arr_type",
    "arr_name": "_OHE_arr_name",
}
indexers = [
    StringIndexer(inputCol=in_col, outputCol=out_col)
    for in_col, out_col in categorical_cols.items()
]
pipeline_of_stringindexers = Pipeline(stages=indexers)
model0 = pipeline_of_stringindexers.fit(df_all_1).transform(df_all_1)
model0.printSchema()
display(model0)
model1 = (
model0.drop("OP_UNIQUE_CARRIER")
.drop("ORIGIN")
.drop("ORIGIN_STATE_NM")
.drop("DEST_CITY_MARKET_ID")
.drop("ORIGIN_STATE_ABR")
.drop("dep_municipality")
.drop("ORIGIN_CITY_NAME")
.drop("ORIGIN_AIRPORT_SEQ_ID")
.drop("DEST_AIRPORT_SEQ_ID")
.drop("DEST_STATE_NM")
.drop("DEST_STATE_NM")
.drop("DEP_TIME")
.drop("ARR_TIME")
.drop("arr_municipality")
.drop("DEST")
.drop("DEST_CITY_NAME")
.drop("DEST_STATE_ABRDEST_CITY_NAME")
.drop("Description")
.drop("DEST_STATE_ABR")
.drop("dep_type")
.drop("dep_name")
.drop("arr_type")
.drop("arr_name")
.drop("dep_local_code")
.drop("arr_local_code")
)
model1.printSchema()
display(model1)
# ## Heatmap
import matplotlib.pyplot as plt
import seaborn as sns
model1_pd = model1.toPandas()
fig, ax = plt.subplots(figsize=(40, 10))
sns.heatmap(model1_pd.corr(), annot=True)
feature_columns = [
"DAY_OF_WEEK",
"ORIGIN_AIRPORT_ID",
"DEST_AIRPORT_ID",
"DISTANCE",
"dep_lat",
"dep_lon",
"dep_elevation_ft",
"arr_lat",
"arr_lon",
"arr_elevation_ft",
"_OHE_OP_UNIQUE_CARRIER",
"_OHE_OP_DEST",
"_OHE_OP_ORIGIN",
"_OHE_ORIGIN_CITY_NAME",
"_OHE_DEST_CITY_NAME",
"_OHE_DEST_STATE_ABR",
"_OHE_description",
"_OHE_dep_type",
"_OHE_dep_name",
"_OHE_arr_type",
"_OHE_arr_name",
]
# op1: without using pipeline
# VectorAssembler: to add all the features into a single column
# assembler = VectorAssembler(inputCols=feature_columns, outputCol="features")
# model2 = assembler.transform(model1)
# op2: using pipeline but MinMaxScaler not really working
# vectAssembler = VectorAssembler(inputCols=feature_columns, outputCol="features")
# minMax = MinMaxScaler(inputCol="features", outputCol="normFeatures")
# pipeline = Pipeline(stages=[vectAssembler, minMax])
# model2 = pipeline.fit(model1).transform(model1)
# op3: using pipeline but MinMaxScaler not really working
vectAssembler = VectorAssembler(inputCols=feature_columns, outputCol="features")
minMax = MinMaxScaler(inputCol="features", outputCol="normfeatures")
pipeline = Pipeline(stages=[vectAssembler, minMax])
model1_1 = pipeline.fit(model1).transform(model1)
finalized_data = model1_1.select("normfeatures", "dep_delay_int")
display(finalized_data)
# split data into train and test
train_data, test_data = finalized_data.randomSplit([0.8, 0.2], seed=42)
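# A quick sanity check (an addition, not in the original flow): inspect the label balance,
# since delayed vs. on-time flights are rarely an even split.
finalized_data.groupBy("dep_delay_int").count().show()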
# ## 1st Set: ML Algorithms - MinMaxScaler
# ## 1st Set: Logistic Regression
# create LogisticRegression model then fit it to training data
train_data_lr = train_data
test_data_lr = test_data
lr = LogisticRegression(labelCol="dep_delay_int", featuresCol="normfeatures")
lr_model = lr.fit(train_data_lr)
# ## 1st Set: Decision Tree Classifier
# create DecisionTreeClassifier model then fit it to training data
train_data_dtc = train_data
test_data_dtc = test_data
dtc = DecisionTreeClassifier(labelCol="dep_delay_int", featuresCol="normfeatures")
dtc_model = dtc.fit(train_data_dtc)
# ## 1st Set: XGBoost
# create XGBoost model then fit it to training data
train_data_xgb = train_data
test_data_xgb = test_data
xgb = SparkXGBClassifier(
features_col="normfeatures", label_col="dep_delay_int", num_workers=2
)
xgb_model = xgb.fit(train_data_xgb)
# Evaluations
# Eval-Logistic Regression
predictions_df_lr = lr_model.transform(train_data_lr)
predictions_df_lr = (
predictions_df_lr.withColumnRenamed("prediction", "prediction_lr")
.withColumnRenamed("dep_delay_int", "dep_delay_int_lr")
.withColumnRenamed("rawPrediction", "rawPrediction_lr")
.withColumnRenamed("probability", "probability_lr")
)
predictions_df_lr.select(
"rawPrediction_lr", "probability_lr", "prediction_lr", "dep_delay_int_lr"
).show(100)
# ok in DataBricks but timed out in Kaggle
# tp_lr = float(predictions_df_lr.filter("prediction_lr == 1.0 AND dep_delay_int_lr == 1").count())
# fp_lr = float(predictions_df_lr.filter("prediction_lr == 1.0 AND dep_delay_int_lr == 0").count())
# tn_lr = float(predictions_df_lr.filter("prediction_lr == 0.0 AND dep_delay_int_lr == 0").count())
# fn_lr = float(predictions_df_lr.filter("prediction_lr == 0.0 AND dep_delay_int_lr == 1").count())
# pr_lr = tp_lr / (tp_lr + fp_lr)
# re_lr = tp_lr / (tp_lr + fn_lr)
# metrics = spark.createDataFrame([
# ("TP", tp_lr),
# ("FP", fp_lr),
# ("TN", tn_lr),
# ("FN", fn_lr),
# ("Precision", pr_lr),
# ("Recall", re_lr),
# ("myAccuracy", (tp_lr+tn_lr)/(tp_lr+fp_lr+tn_lr+fn_lr)),
# ("F1", 2*pr_lr*re_lr/(re_lr+pr_lr))],["metric_for_lr1", "value"])
# metrics.show()
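# A hedged alternative to the four filter/count passes sketched above: build the whole
# confusion matrix in a single groupBy aggregation, which triggers only one Spark job.
predictions_df_lr.groupBy("dep_delay_int_lr", "prediction_lr").count().show()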
evaluator_lr_mc_acc = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr", predictionCol="prediction_lr", metricName="accuracy"
)
lr_accuracy = evaluator_lr_mc_acc.evaluate(predictions_df_lr)
evaluator_lr_mc_precision = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr",
predictionCol="prediction_lr",
metricName="precisionByLabel",
)
lr_precision = evaluator_lr_mc_precision.evaluate(predictions_df_lr)
evaluator_lr_mc_recall = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr",
predictionCol="prediction_lr",
metricName="recallByLabel",
)
lr_recall = evaluator_lr_mc_recall.evaluate(predictions_df_lr)
evaluator_lr_mc_f1 = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr", predictionCol="prediction_lr", metricName="f1"
)
lr_f1 = evaluator_lr_mc_f1.evaluate(predictions_df_lr)
# area under ROC
evaluator_lr_bc = BinaryClassificationEvaluator(
labelCol="dep_delay_int_lr",
rawPredictionCol="prediction_lr",
metricName="areaUnderROC",
)
lr_areaUnderROC = evaluator_lr_bc.evaluate(predictions_df_lr)
# Eval-Decision Tree Classifier
predictions_df_dtc = dtc_model.transform(train_data_dtc)
predictions_df_dtc = (
predictions_df_dtc.withColumnRenamed("prediction", "prediction_dtc")
.withColumnRenamed("dep_delay_int", "dep_delay_int_dtc")
.withColumnRenamed("rawPrediction", "rawPrediction_dtc")
.withColumnRenamed("probability", "probability_dtc")
)
predictions_df_dtc.select(
"rawPrediction_dtc", "probability_dtc", "prediction_dtc", "dep_delay_int_dtc"
).show(100)
# ok in DataBricks but timed out in Kaggle
# tp_dtc = float(predictions_df_dtc.filter("prediction_dtc == 1.0 AND dep_delay_int_dtc == 1").count())
# fp_dtc = float(predictions_df_dtc.filter("prediction_dtc == 1.0 AND dep_delay_int_dtc == 0").count())
# tn_dtc = float(predictions_df_dtc.filter("prediction_dtc == 0.0 AND dep_delay_int_dtc == 0").count())
# fn_dtc = float(predictions_df_dtc.filter("prediction_dtc == 0.0 AND dep_delay_int_dtc == 1").count())
# pr_dtc = tp_dtc / (tp_dtc + fp_dtc)
# re_dtc = tp_dtc / (tp_dtc + fn_dtc)
# metrics = spark.createDataFrame([
# ("TP", tp_dtc),
# ("FP", fp_dtc),
# ("TN", tn_dtc),
# ("FN", fn_dtc),
# ("Precision", pr_dtc),
# ("Recall", re_dtc),
# ("myAccuracy", (tp_dtc+tn_dtc)/(tp_dtc+fp_dtc+tn_dtc+fn_dtc)),
# ("F1", 2*pr_dtc*re_dtc/(re_dtc+pr_dtc))],["metric_for_dtc1", "value"])
# metrics.show()
evaluator_dtc_mc_acc = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc", predictionCol="prediction_dtc", metricName="accuracy"
)
dtc_accuracy = evaluator_dtc_mc_acc.evaluate(predictions_df_dtc)
evaluator_dtc_mc_precision = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc",
predictionCol="prediction_dtc",
metricName="precisionByLabel",
)
dtc_precision = evaluator_dtc_mc_precision.evaluate(predictions_df_dtc)
evaluator_dtc_mc_recall = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc",
predictionCol="prediction_dtc",
metricName="recallByLabel",
)
dtc_recall = evaluator_dtc_mc_recall.evaluate(predictions_df_dtc)
evaluator_dtc_mc_f1 = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc", predictionCol="prediction_dtc", metricName="f1"
)
dtc_f1 = evaluator_dtc_mc_f1.evaluate(predictions_df_dtc)
# area under ROC
evaluator_dtc_bc = BinaryClassificationEvaluator(
labelCol="dep_delay_int_dtc",
rawPredictionCol="prediction_dtc",
metricName="areaUnderROC",
)
dtc_areaUnderROC = evaluator_dtc_bc.evaluate(predictions_df_dtc)
# Eval-XGBoost Classifier
predictions_df_xgb = xgb_model.transform(train_data_xgb)
predictions_df_xgb = (
predictions_df_xgb.withColumnRenamed("prediction", "prediction_xgb")
.withColumnRenamed("dep_delay_int", "dep_delay_int_xgb")
.withColumnRenamed("rawPrediction", "rawPrediction_xgb")
.withColumnRenamed("probability", "probability_xgb")
)
predictions_df_xgb.select(
"rawPrediction_xgb", "probability_xgb", "prediction_xgb", "dep_delay_int_xgb"
).show(100)
# ok in DataBricks but timed out in Kaggle
# tp_xgb = float(predictions_df_xgb.filter("prediction_xgb == 1.0 AND dep_delay_int_xgb == 1").count())
# fp_xgb = float(predictions_df_xgb.filter("prediction_xgb == 1.0 AND dep_delay_int_xgb == 0").count())
# tn_xgb = float(predictions_df_xgb.filter("prediction_xgb == 0.0 AND dep_delay_int_xgb == 0").count())
# fn_xgb = float(predictions_df_xgb.filter("prediction_xgb == 0.0 AND dep_delay_int_xgb == 1").count())
# pr_xgb = tp_xgb / (tp_xgb + fp_xgb)
# re_xgb = tp_xgb / (tp_xgb + fn_xgb)
# metrics = spark.createDataFrame([
# ("TP", tp_xgb),
# ("FP", fp_xgb),
# ("TN", tn_xgb),
# ("FN", fn_xgb),
# ("Precision", pr_xgb),
# ("Recall", re_xgb),
# ("myAccuracy", (tp_xgb+tn_xgb)/(tp_xgb+fp_xgb+tn_xgb+fn_xgb)),
# ("F1", 2*pr_xgb*re_xgb/(re_xgb+pr_xgb))],["metric_for_xgb1", "value"])
# metrics.show()
evaluator_xgb_mc_acc = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb", predictionCol="prediction_xgb", metricName="accuracy"
)
xgb_accuracy = evaluator_xgb_mc_acc.evaluate(predictions_df_xgb)
evaluator_xgb_mc_precision = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb",
predictionCol="prediction_xgb",
metricName="precisionByLabel",
)
xgb_precision = evaluator_xgb_mc_precision.evaluate(predictions_df_xgb)
evaluator_xgb_mc_recall = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb",
predictionCol="prediction_xgb",
metricName="recallByLabel",
)
xgb_recall = evaluator_xgb_mc_recall.evaluate(predictions_df_xgb)
evaluator_xgb_mc_f1 = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb", predictionCol="prediction_xgb", metricName="f1"
)
xgb_f1 = evaluator_xgb_mc_f1.evaluate(predictions_df_xgb)
# area under ROC
evaluator_xgb_bc = BinaryClassificationEvaluator(
labelCol="dep_delay_int_xgb",
rawPredictionCol="prediction_xgb",
metricName="areaUnderROC",
)
xgb_areaUnderROC = evaluator_xgb_bc.evaluate(predictions_df_xgb)
# ## 1st Set: Metrics
print(f"LR1 Accuracy: {lr_accuracy}")
print(f"LR1 Precision: {lr_precision}")
print(f"LR1 Recall: {lr_recall}")
print(f"LR1 F1: {lr_f1}")
print(f"LR1 AreaUnderROC: {lr_areaUnderROC}")
print(f"DTC1 Accuracy: {dtc_accuracy}")
print(f"DTC1 Precision: {dtc_precision}")
print(f"DTC1 Recall: {dtc_recall}")
print(f"DTC1 F1: {dtc_f1}")
print(f"DTC1 AreaUnderROC: {dtc_areaUnderROC}")
print(f"XGB1 Accuracy: {xgb_accuracy}")
print(f"XGB1 Precision: {xgb_precision}")
print(f"XGB1 Recall: {xgb_recall}")
print(f"XGB1 F1: {xgb_f1}")
print(f"XGB1 AreaUnderROC: {xgb_areaUnderROC}")
# ## 2nd Set: ML Algorithms - Added OneHotEncoder
display(model1)
from pyspark.ml.feature import OneHotEncoder
model2 = model1
model2_pd = model2.toPandas()
display(model2_pd)
# onehotencoder for _OHE_OP_UNIQUE_CARRIER column
encoder = OneHotEncoder(inputCol="_OHE_OP_UNIQUE_CARRIER", outputCol="carrier_onehot")
encoded_df = encoder.fit(model2).transform(model2)
display(encoded_df)
from pyspark.ml.functions import vector_to_array
df_col_onehot = encoded_df.select(
"*", vector_to_array("carrier_onehot").alias("col_onehot")
)
display(df_col_onehot)
num_categories = len(df_col_onehot.first()["col_onehot"]) # 3
display(df_col_onehot.first())
num_categories = len(df_col_onehot.first()["col_onehot"]) # 3
cols_expanded = [(col("col_onehot")[i]) for i in range(num_categories)]
df_cols_onehot2 = df_col_onehot.select("*", *cols_expanded)
display(df_cols_onehot2)
# onehotencoder for day_of_week
encoder_dayofweek = OneHotEncoder(
inputCol="DAY_OF_WEEK", outputCol="day_of_week_onehot"
)
encoded_df_dayofweek = encoder_dayofweek.fit(df_cols_onehot2).transform(df_cols_onehot2)
df_col_onehot_dayofweek = encoded_df_dayofweek.select(
"*", vector_to_array("day_of_week_onehot").alias("col_onehot_day_of_week")
)
num_categories_dayofweek = len(
df_col_onehot_dayofweek.first()["col_onehot_day_of_week"]
) # 3
cols_expanded_dayofweek = [
(col("col_onehot_day_of_week")[i]) for i in range(num_categories_dayofweek)
]
df_cols_onehot_day_of_week = df_col_onehot_dayofweek.select(
"*", *cols_expanded_dayofweek
)
display(df_cols_onehot_day_of_week)
vectAssembler2 = VectorAssembler(inputCols=feature_columns2, outputCol="features2")
minMax2 = MinMaxScaler(inputCol="features2", outputCol="normfeatures2")
pipeline2 = Pipeline(stages=[vectAssembler2, minMax2])
model2 = pipeline2.fit(df_cols_onehot_day_of_week).transform(df_cols_onehot_day_of_week)
display(model2)
finalized_data2 = model2.select(
"carrier_onehot", "day_of_week_onehot", "normfeatures2", "dep_delay_int"
)
display(finalized_data2)
train_data2, test_data2 = finalized_data2.randomSplit([0.8, 0.2], seed=42)
# ## 2nd Set: Logistic Regression
train_data_lr2 = train_data2
test_data_lr2 = test_data2
lr2 = LogisticRegression(labelCol="dep_delay_int", featuresCol="normfeatures2")
lr2_model = lr2.fit(train_data_lr2)
predictions_df_lr2 = lr2_model.transform(train_data_lr2)
predictions_df_lr2 = (
predictions_df_lr2.withColumnRenamed("prediction", "prediction_lr2")
.withColumnRenamed("dep_delay_int", "dep_delay_int_lr2")
.withColumnRenamed("rawPrediction", "rawPrediction_lr2")
.withColumnRenamed("probability", "probability_lr2")
)
predictions_df_lr2.select(
"rawPrediction_lr2", "probability_lr2", "prediction_lr2", "dep_delay_int_lr2"
).show(100)
# ok in DataBricks but timed out in Kaggle
# tp_lr2 = float(predictions_df_lr2.filter("prediction_lr2 == 1.0 AND dep_delay_int_lr2 == 1").count())
# fp_lr2 = float(predictions_df_lr2.filter("prediction_lr2 == 1.0 AND dep_delay_int_lr2 == 0").count())
# tn_lr2 = float(predictions_df_lr2.filter("prediction_lr2 == 0.0 AND dep_delay_int_lr2 == 0").count())
# fn_lr2 = float(predictions_df_lr2.filter("prediction_lr2 == 0.0 AND dep_delay_int_lr2 == 1").count())
# pr_lr2 = tp_lr2 / (tp_lr2 + fp_lr2)
# re_lr2 = tp_lr2 / (tp_lr2 + fn_lr2)
# metrics = spark.createDataFrame([
# ("TP", tp_lr2),
# ("FP", fp_lr2),
# ("TN", tn_lr2),
# ("FN", fn_lr2),
# ("Precision", pr_lr2),
# ("Recall", re_lr2),
# ("myAccuracy", (tp_lr2+tn_lr2)/(tp_lr2+fp_lr2+tn_lr2+fn_lr2)),
# ("F1", 2*pr_lr2*re_lr2/(re_lr2+pr_lr2))],["metric_for_lr2", "value"])
# metrics.show()
evaluator_lr2_mc_acc = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr2", predictionCol="prediction_lr2", metricName="accuracy"
)
lr2_accuracy = evaluator_lr2_mc_acc.evaluate(predictions_df_lr2)
evaluator_lr2_mc_precision = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr2",
predictionCol="prediction_lr2",
metricName="precisionByLabel",
)
lr2_precision = evaluator_lr2_mc_precision.evaluate(predictions_df_lr2)
evaluator_lr2_mc_recall = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr2",
predictionCol="prediction_lr2",
metricName="recallByLabel",
)
lr2_recall = evaluator_lr2_mc_recall.evaluate(predictions_df_lr2)
evaluator_lr2_mc_f1 = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr2", predictionCol="prediction_lr2", metricName="f1"
)
lr2_f1 = evaluator_lr2_mc_f1.evaluate(predictions_df_lr2)
# area under ROC
evaluator_lr2_bc = BinaryClassificationEvaluator(
labelCol="dep_delay_int_lr2",
rawPredictionCol="prediction_lr2",
metricName="areaUnderROC",
)
lr2_areaUnderROC = evaluator_lr2_bc.evaluate(predictions_df_lr2)
# ## 2nd Set: Decision Tree Classifier
# create DecisionTreeClassifier model then fit it to training data
train_data_dtc2 = train_data2
test_data_dtc2 = test_data2
dtc2 = DecisionTreeClassifier(labelCol="dep_delay_int", featuresCol="normfeatures2")
dtc_model2 = dtc2.fit(train_data_dtc2)
predictions_df_dtc2 = dtc_model2.transform(train_data_dtc2)
predictions_df_dtc2 = (
predictions_df_dtc2.withColumnRenamed("prediction", "prediction_dtc2")
.withColumnRenamed("dep_delay_int", "dep_delay_int_dtc2")
.withColumnRenamed("rawPrediction", "rawPrediction_dtc2")
.withColumnRenamed("probability", "probability_dtc2")
)
predictions_df_dtc2.select(
"rawPrediction_dtc2", "probability_dtc2", "prediction_dtc2", "dep_delay_int_dtc2"
).show(100)
# ok in DataBricks but timed out in Kaggle
# tp_dtc2 = float(predictions_df_dtc2.filter("prediction_dtc2 == 1.0 AND dep_delay_int_dtc2 == 1").count())
# fp_dtc2 = float(predictions_df_dtc2.filter("prediction_dtc2 == 1.0 AND dep_delay_int_dtc2 == 0").count())
# tn_dtc2 = float(predictions_df_dtc2.filter("prediction_dtc2 == 0.0 AND dep_delay_int_dtc2 == 0").count())
# fn_dtc2 = float(predictions_df_dtc2.filter("prediction_dtc2 == 0.0 AND dep_delay_int_dtc2 == 1").count())
# pr_dtc2 = tp_dtc2 / (tp_dtc2 + fp_dtc2)
# re_dtc2 = tp_dtc2 / (tp_dtc2 + fn_dtc2)
# metrics = spark.createDataFrame([
# ("TP", tp_dtc2),
# ("FP", fp_dtc2),
# ("TN", tn_dtc2),
# ("FN", fn_dtc2),
# ("Precision", pr_dtc2),
# ("Recall", re_dtc2),
# ("myAccuracy", (tp_dtc2+tn_dtc2)/(tp_dtc2+fp_dtc2+tn_dtc2+fn_dtc2)),
# ("F1", 2*pr_dtc2*re_dtc2/(re_dtc2+pr_dtc2))],["metric_for_dtc2", "value"])
# metrics.show()
evaluator_dtc2_mc_acc = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc2",
predictionCol="prediction_dtc2",
metricName="accuracy",
)
dtc2_accuracy = evaluator_dtc2_mc_acc.evaluate(predictions_df_dtc2)
evaluator_dtc2_mc_precision = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc2",
predictionCol="prediction_dtc2",
metricName="precisionByLabel",
)
dtc2_precision = evaluator_dtc2_mc_precision.evaluate(predictions_df_dtc2)
evaluator_dtc2_mc_recall = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc2",
predictionCol="prediction_dtc2",
metricName="recallByLabel",
)
dtc2_recall = evaluator_dtc2_mc_recall.evaluate(predictions_df_dtc2)
evaluator_dtc2_mc_f1 = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc2", predictionCol="prediction_dtc2", metricName="f1"
)
dtc2_f1 = evaluator_dtc2_mc_f1.evaluate(predictions_df_dtc2)
# area under ROC
evaluator_dtc2_bc = BinaryClassificationEvaluator(
labelCol="dep_delay_int_dtc2",
rawPredictionCol="prediction_dtc2",
metricName="areaUnderROC",
)
dtc2_areaUnderROC = evaluator_dtc2_bc.evaluate(predictions_df_dtc2)
# ## 2nd Set: XGBoost
train_data_xgb2 = train_data2
test_data_xgb2 = test_data2
from xgboost.spark import SparkXGBClassifier
xgb2 = SparkXGBClassifier(
features_col="normfeatures2", label_col="dep_delay_int", num_workers=2
)
xgb_model2 = xgb2.fit(train_data_xgb2)
predictions_df_xgb2 = xgb_model2.transform(train_data_xgb2)
predictions_df_xgb2 = (
predictions_df_xgb2.withColumnRenamed("prediction", "prediction_xgb2")
.withColumnRenamed("dep_delay_int", "dep_delay_int_xgb2")
.withColumnRenamed("rawPrediction", "rawPrediction_xgb2")
.withColumnRenamed("probability", "probability_xgb2")
)
predictions_df_xgb2.select(
"rawPrediction_xgb2", "probability_xgb2", "prediction_xgb2", "dep_delay_int_xgb2"
).show(100)
# ok in DataBricks but timed out in Kaggle
# tp_xgb2 = float(predictions_df_xgb2.filter("prediction_xgb2 == 1.0 AND dep_delay_int_xgb2 == 1").count())
# fp_xgb2 = float(predictions_df_xgb2.filter("prediction_xgb2 == 1.0 AND dep_delay_int_xgb2 == 0").count())
# tn_xgb2 = float(predictions_df_xgb2.filter("prediction_xgb2 == 0.0 AND dep_delay_int_xgb2 == 0").count())
# fn_xgb2 = float(predictions_df_xgb2.filter("prediction_xgb2 == 0.0 AND dep_delay_int_xgb2 == 1").count())
# pr_xgb2 = tp_xgb2 / (tp_xgb2 + fp_xgb2)
# re_xgb2 = tp_xgb2 / (tp_xgb2 + fn_xgb2)
# metrics = spark.createDataFrame([
# ("TP", tp_xgb2),
# ("FP", fp_xgb2),
# ("TN", tn_xgb2),
# ("FN", fn_xgb2),
# ("Precision", pr_xgb2),
# ("Recall", re_xgb2),
# ("myAccuracy", (tp_xgb2+tn_xgb2)/(tp_xgb2+fp_xgb2+tn_xgb2+fn_xgb2)),
# ("F1", 2*pr_xgb2*re_xgb2/(re_xgb2+pr_xgb2))],["metric_for_xgb2", "value"])
# metrics.show()
evaluator_xgb2_mc_acc = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb2",
predictionCol="prediction_xgb2",
metricName="accuracy",
)
xgb2_accuracy = evaluator_xgb2_mc_acc.evaluate(predictions_df_xgb2)
evaluator_xgb2_mc_precision = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb2",
predictionCol="prediction_xgb2",
metricName="precisionByLabel",
)
xgb2_precision = evaluator_xgb2_mc_precision.evaluate(predictions_df_xgb2)
evaluator_xgb2_mc_recall = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb2",
predictionCol="prediction_xgb2",
metricName="recallByLabel",
)
xgb2_recall = evaluator_xgb2_mc_recall.evaluate(predictions_df_xgb2)
evaluator_xgb2_mc_f1 = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb2", predictionCol="prediction_xgb2", metricName="f1"
)
xgb2_f1 = evaluator_xgb2_mc_f1.evaluate(predictions_df_xgb2)
# area under ROC
evaluator_xgb2_bc = BinaryClassificationEvaluator(
labelCol="dep_delay_int_xgb2",
rawPredictionCol="prediction_xgb2",
metricName="areaUnderROC",
)
xgb2_areaUnderROC = evaluator_xgb2_bc.evaluate(predictions_df_xgb2)
# ## 2nd Set: Metrics
print(f"LR2 Accuracy: {lr2_accuracy}")
print(f"LR2 Precision: {lr2_precision}")
print(f"LR2 Recall: {lr2_recall}")
print(f"LR2 F1: {lr2_f1}")
print(f"LR2 AreaUnderROC: {lr2_areaUnderROC}")
print(f"DTC2 Accuracy: {dtc2_accuracy}")
print(f"DTC2 Precision: {dtc2_precision}")
print(f"DTC2 Recall: {dtc2_recall}")
print(f"DTC2 F1: {dtc2_f1}")
print(f"DTC2 AreaUnderROC: {dtc2_areaUnderROC}")
print(f"XGB2 Accuracy: {xgb2_accuracy}")
print(f"XGB2 Precision: {xgb2_precision}")
print(f"XGB2 Recall: {xgb2_recall}")
print(f"XGB2 F1: {xgb2_f1}")
print(f"XGB2 AreaUnderROC: {xgb2_areaUnderROC}")
# ## 3rd Set: ML Algorithms - Switched to MaxAbsScaler
df_v3 = df_cols_onehot_day_of_week
vectAssembler3 = VectorAssembler(inputCols=feature_columns3, outputCol="features3")
maxAbs = MaxAbsScaler(inputCol="features3", outputCol="normfeatures3")
pipeline3 = Pipeline(stages=[vectAssembler3, maxAbs])
model3 = pipeline3.fit(df_v3).transform(df_v3)
finalized_data3 = model3.select("normfeatures3", "dep_delay_int")
display(finalized_data3)
train_data3, test_data3 = finalized_data3.randomSplit([0.8, 0.2], seed=42)
# ## 3rd Set: Logistic Regression
train_data_lr3 = train_data3
test_data_lr3 = test_data3
lr3 = LogisticRegression(labelCol="dep_delay_int", featuresCol="normfeatures3")
lr3_model = lr3.fit(train_data_lr3)
predictions_df_lr3 = lr3_model.transform(train_data_lr3)
predictions_df_lr3 = (
predictions_df_lr3.withColumnRenamed("prediction", "prediction_lr3")
.withColumnRenamed("dep_delay_int", "dep_delay_int_lr3")
.withColumnRenamed("rawPrediction", "rawPrediction_lr3")
.withColumnRenamed("probability", "probability_lr3")
)
predictions_df_lr3.select(
"rawPrediction_lr3", "probability_lr3", "prediction_lr3", "dep_delay_int_lr3"
).show(100)
# ok in DataBricks but timed out in Kaggle
# tp_lr3 = float(predictions_df_lr3.filter("prediction_lr3 == 1.0 AND dep_delay_int_lr3 == 1").count())
# fp_lr3 = float(predictions_df_lr3.filter("prediction_lr3 == 1.0 AND dep_delay_int_lr3 == 0").count())
# tn_lr3 = float(predictions_df_lr3.filter("prediction_lr3 == 0.0 AND dep_delay_int_lr3 == 0").count())
# fn_lr3 = float(predictions_df_lr3.filter("prediction_lr3 == 0.0 AND dep_delay_int_lr3 == 1").count())
# pr_lr3 = tp_lr3 / (tp_lr3 + fp_lr3)
# re_lr3 = tp_lr3 / (tp_lr3 + fn_lr3)
# metrics = spark.createDataFrame([
# ("TP", tp_lr3),
# ("FP", fp_lr3),
# ("TN", tn_lr3),
# ("FN", fn_lr3),
# ("Precision", pr_lr3),
# ("Recall", re_lr3),
# ("myAccuracy", (tp_lr3+tn_lr3)/(tp_lr3+fp_lr3+tn_lr3+fn_lr3)),
# ("F1", 2*pr_lr3*re_lr3/(re_lr3+pr_lr3))],["metric_for_lr3", "value"])
# metrics.show()
evaluator_lr3_mc_acc = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr3", predictionCol="prediction_lr3", metricName="accuracy"
)
lr3_accuracy = evaluator_lr3_mc_acc.evaluate(predictions_df_lr3)
evaluator_lr3_mc_precision = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr3",
predictionCol="prediction_lr3",
metricName="precisionByLabel",
)
lr3_precision = evaluator_lr3_mc_precision.evaluate(predictions_df_lr3)
evaluator_lr3_mc_recall = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr3",
predictionCol="prediction_lr3",
metricName="recallByLabel",
)
lr3_recall = evaluator_lr3_mc_recall.evaluate(predictions_df_lr3)
evaluator_lr3_mc_f1 = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_lr3", predictionCol="prediction_lr3", metricName="f1"
)
lr3_f1 = evaluator_lr3_mc_f1.evaluate(predictions_df_lr3)
# area under ROC
evaluator_lr3_bc = BinaryClassificationEvaluator(
labelCol="dep_delay_int_lr3",
rawPredictionCol="prediction_lr3",
metricName="areaUnderROC",
)
lr3_areaUnderROC = evaluator_lr3_bc.evaluate(predictions_df_lr3)
# ## 3rd Set: Decision Tree Classifier
train_data_dtc3 = train_data3
test_data_dtc3 = test_data3
dtc3 = DecisionTreeClassifier(labelCol="dep_delay_int", featuresCol="normfeatures3")
dtc_model3 = dtc3.fit(train_data_dtc3)
predictions_df_dtc3 = dtc_model3.transform(train_data_dtc3)
predictions_df_dtc3 = (
predictions_df_dtc3.withColumnRenamed("prediction", "prediction_dtc3")
.withColumnRenamed("dep_delay_int", "dep_delay_int_dtc3")
.withColumnRenamed("rawPrediction", "rawPrediction_dtc3")
.withColumnRenamed("probability", "probability_dtc3")
)
predictions_df_dtc3.select(
"rawPrediction_dtc3", "probability_dtc3", "prediction_dtc3", "dep_delay_int_dtc3"
).show(100)
# ok in DataBricks but timed out in Kaggle
# tp_dtc3 = float(predictions_df_dtc3.filter("prediction_dtc3 == 1.0 AND dep_delay_int_dtc3 == 1").count())
# fp_dtc3 = float(predictions_df_dtc3.filter("prediction_dtc3 == 1.0 AND dep_delay_int_dtc3 == 0").count())
# tn_dtc3 = float(predictions_df_dtc3.filter("prediction_dtc3 == 0.0 AND dep_delay_int_dtc3 == 0").count())
# fn_dtc3 = float(predictions_df_dtc3.filter("prediction_dtc3 == 0.0 AND dep_delay_int_dtc3 == 1").count())
# pr_dtc3 = tp_dtc3 / (tp_dtc3 + fp_dtc3)
# re_dtc3 = tp_dtc3 / (tp_dtc3 + fn_dtc3)
# metrics = spark.createDataFrame([
# ("TP", tp_dtc3),
# ("FP", fp_dtc3),
# ("TN", tn_dtc3),
# ("FN", fn_dtc3),
# ("Precision", pr_dtc3),
# ("Recall", re_dtc3),
# ("myAccuracy", (tp_dtc3+tn_dtc3)/(tp_dtc3+fp_dtc3+tn_dtc3+fn_dtc3)),
# ("F1", 2*pr_dtc3*re_dtc3/(re_dtc3+pr_dtc3))],["metric_for_dtc3", "value"])
# metrics.show()
evaluator_dtc3_mc_acc = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc3",
predictionCol="prediction_dtc3",
metricName="accuracy",
)
dtc3_accuracy = evaluator_dtc3_mc_acc.evaluate(predictions_df_dtc3)
evaluator_dtc3_mc_precision = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc3",
predictionCol="prediction_dtc3",
metricName="precisionByLabel",
)
dtc3_precision = evaluator_dtc3_mc_precision.evaluate(predictions_df_dtc3)
evaluator_dtc3_mc_recall = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc3",
predictionCol="prediction_dtc3",
metricName="recallByLabel",
)
dtc3_recall = evaluator_dtc3_mc_recall.evaluate(predictions_df_dtc3)
evaluator_dtc3_mc_f1 = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_dtc3", predictionCol="prediction_dtc3", metricName="f1"
)
dtc3_f1 = evaluator_dtc3_mc_f1.evaluate(predictions_df_dtc3)
# area under ROC
evaluator_dtc3_bc = BinaryClassificationEvaluator(
labelCol="dep_delay_int_dtc3",
rawPredictionCol="prediction_dtc3",
metricName="areaUnderROC",
)
dtc3_areaUnderROC = evaluator_dtc3_bc.evaluate(predictions_df_dtc3)
# ## 3rd Set: XGBoost
train_data_xgb3 = train_data3
test_data_xgb3 = test_data3
from xgboost.spark import SparkXGBClassifier
xgb3 = SparkXGBClassifier(
features_col="normfeatures3", label_col="dep_delay_int", num_workers=2
)
xgb_model3 = xgb3.fit(train_data_xgb3)
predictions_df_xgb3 = xgb_model3.transform(train_data_xgb3)
predictions_df_xgb3 = (
predictions_df_xgb3.withColumnRenamed("prediction", "prediction_xgb3")
.withColumnRenamed("dep_delay_int", "dep_delay_int_xgb3")
.withColumnRenamed("rawPrediction", "rawPrediction_xgb3")
.withColumnRenamed("probability", "probability_xgb3")
)
predictions_df_xgb3.select(
"rawPrediction_xgb3", "probability_xgb3", "prediction_xgb3", "dep_delay_int_xgb3"
).show(100)
# ok in DataBricks but timed out in Kaggle
# tp_xgb3 = float(predictions_df_xgb3.filter("prediction_xgb3 == 1.0 AND dep_delay_int_xgb3 == 1").count())
# fp_xgb3 = float(predictions_df_xgb3.filter("prediction_xgb3 == 1.0 AND dep_delay_int_xgb3 == 0").count())
# tn_xgb3 = float(predictions_df_xgb3.filter("prediction_xgb3 == 0.0 AND dep_delay_int_xgb3 == 0").count())
# fn_xgb3 = float(predictions_df_xgb3.filter("prediction_xgb3 == 0.0 AND dep_delay_int_xgb3 == 1").count())
# pr_xgb3 = tp_xgb3 / (tp_xgb3 + fp_xgb3)
# re_xgb3 = tp_xgb3 / (tp_xgb3 + fn_xgb3)
# metrics = spark.createDataFrame([
# ("TP", tp_xgb3),
# ("FP", fp_xgb3),
# ("TN", tn_xgb3),
# ("FN", fn_xgb3),
# ("Precision", pr_xgb3),
# ("Recall", re_xgb3),
# ("myAccuracy", (tp_xgb3+tn_xgb3)/(tp_xgb3+fp_xgb3+tn_xgb3+fn_xgb3)),
# ("F1", 2*pr_xgb3*re_xgb3/(re_xgb3+pr_xgb3))],["metric_for_xgb3", "value"])
# metrics.show()
evaluator_xgb3_mc_acc = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb3",
predictionCol="prediction_xgb3",
metricName="accuracy",
)
xgb3_accuracy = evaluator_xgb3_mc_acc.evaluate(predictions_df_xgb3)
evaluator_xgb3_mc_precision = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb3",
predictionCol="prediction_xgb3",
metricName="precisionByLabel",
)
xgb3_precision = evaluator_xgb3_mc_precision.evaluate(predictions_df_xgb3)
evaluator_xgb3_mc_recall = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb3",
predictionCol="prediction_xgb3",
metricName="recallByLabel",
)
xgb3_recall = evaluator_xgb3_mc_recall.evaluate(predictions_df_xgb3)
evaluator_xgb3_mc_f1 = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb3", predictionCol="prediction_xgb3", metricName="f1"
)
xgb3_f1 = evaluator_xgb3_mc_f1.evaluate(predictions_df_xgb3)
# area under ROC
evaluator_xgb3_bc = BinaryClassificationEvaluator(
labelCol="dep_delay_int_xgb3",
rawPredictionCol="prediction_xgb3",
metricName="areaUnderROC",
)
xgb3_areaUnderROC = evaluator_xgb3_bc.evaluate(predictions_df_xgb3)
# ## 3rd Set: Metrics
print(f"LR3 Accuracy: {lr3_accuracy}")
print(f"LR3 Precision: {lr3_precision}")
print(f"LR3 Recall: {lr3_recall}")
print(f"LR3 F1: {lr3_f1}")
print(f"LR3 AreaUnderROC: {lr3_areaUnderROC}")
print(f"DTC3 Accuracy: {dtc3_accuracy}")
print(f"DTC3 Precision: {dtc3_precision}")
print(f"DTC3 Recall: {dtc3_recall}")
print(f"DTC3 F1: {dtc3_f1}")
print(f"DTC3 AreaUnderROC: {dtc3_areaUnderROC}")
print(f"XGB3 Accuracy: {xgb3_accuracy}")
print(f"XGB3 Precision: {xgb3_precision}")
print(f"XGB3 Recall: {xgb3_recall}")
print(f"XGB3 F1: {xgb3_f1}")
print(f"XGB3 AreaUnderROC: {xgb3_areaUnderROC}")
# ## Analysis Comparing Metrics
# ##### Three sets of Logistic Regression, Decision Tree, and XGBoost models were run.
# ##### The first set used StringIndexer, VectorAssembler, and MinMaxScaler.
# ##### The second set used StringIndexer, OneHotEncoder, VectorAssembler, and MinMaxScaler.
# ##### The third set used StringIndexer, OneHotEncoder, VectorAssembler, and MaxAbsScaler.
# ##### Based on the metrics above, the second set performed the best, and within that set XGBoost performed the best: Accuracy 0.6685, Precision 0.6799, Recall 0.8955, F1 0.6297, and Area Under ROC 0.5885. These scores leave room for improvement, which suggests that the features in this dataset are not sufficient for the models to learn whether or not a flight will be delayed. Additional features, such as weather conditions, would be beneficial and should help raise these scores.
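# A minimal sketch for gathering the training metrics computed above into one table for an
# easier side-by-side comparison; it assumes all nine models were evaluated as in the cells above.
import pandas as pd

metric_names = ["Accuracy", "Precision", "Recall", "F1", "AreaUnderROC"]
metrics_by_model = {
    "LR1": [lr_accuracy, lr_precision, lr_recall, lr_f1, lr_areaUnderROC],
    "DTC1": [dtc_accuracy, dtc_precision, dtc_recall, dtc_f1, dtc_areaUnderROC],
    "XGB1": [xgb_accuracy, xgb_precision, xgb_recall, xgb_f1, xgb_areaUnderROC],
    "LR2": [lr2_accuracy, lr2_precision, lr2_recall, lr2_f1, lr2_areaUnderROC],
    "DTC2": [dtc2_accuracy, dtc2_precision, dtc2_recall, dtc2_f1, dtc2_areaUnderROC],
    "XGB2": [xgb2_accuracy, xgb2_precision, xgb2_recall, xgb2_f1, xgb2_areaUnderROC],
    "LR3": [lr3_accuracy, lr3_precision, lr3_recall, lr3_f1, lr3_areaUnderROC],
    "DTC3": [dtc3_accuracy, dtc3_precision, dtc3_recall, dtc3_f1, dtc3_areaUnderROC],
    "XGB3": [xgb3_accuracy, xgb3_precision, xgb3_recall, xgb3_f1, xgb3_areaUnderROC],
}
display(pd.DataFrame(metrics_by_model, index=metric_names).T)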
# ## Feature Importance
print(feature_columns2)
xgb_model2.get_booster().feature_names = feature_columns2
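# importance_type="gain" ranks features by the average loss reduction of the splits that use
# them; get_score also supports "weight" (split count), "cover", "total_gain", and "total_cover".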
important_features = xgb_model2.get_booster().get_score(importance_type="gain")
display(important_features)
i_f_sorted = {
k: v
for k, v in sorted(
important_features.items(), key=lambda item: item[1], reverse=True
)
}
print(i_f_sorted)
import pandas as pd
i_f_df = pd.DataFrame(
{"Features": i_f_sorted.keys(), "Importance": i_f_sorted.values()}
)
display(i_f_df)
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 15))
plt.barh(i_f_df.Features, i_f_df.Importance)
plt.xlabel("Importance")
plt.ylabel("Feature")
plt.legend(["Score"])
plt.title("Feature Importance")
plt.tight_layout()
plt.show()
# ## Analysis
# ##### The feature with the most impact on predicting whether or not a flight will be delayed is the airline carrier (represented as col_onehot[n], with n identifying a specific carrier). Based on the plot, the four highest-ranked features are all related to airline carriers. They are followed by the departure airport name (dep_name), meaning certain departure airports are more prone to delays, which is probably related to how busy those airports typically are and how much traffic they handle. The next most significant feature is col_onehot_day_of_week[3], which represents Wednesday, followed by Saturday.
#
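# A rough sketch of how the col_onehot[n] slots above can be traced back to carrier names. It
# assumes model0 (the StringIndexer output from earlier) is still available and relies on the
# nominal-attribute metadata that StringIndexer attaches to its output column; note that with
# OneHotEncoder's default dropLast=True the last indexed category has no one-hot slot.
carrier_labels = model0.schema["_OHE_OP_UNIQUE_CARRIER"].metadata["ml_attr"]["vals"]
for slot, carrier in enumerate(carrier_labels[:-1]):
    print(f"col_onehot[{slot}] -> {carrier}")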
predictions_df_xgb2_test = xgb_model2.transform(test_data_xgb2)
predictions_df_xgb2_test_columns_renamed = (
predictions_df_xgb2_test.withColumnRenamed("prediction", "prediction_xgb2_test")
.withColumnRenamed("dep_delay_int", "dep_delay_int_xgb2_test")
.withColumnRenamed("rawPrediction", "rawPrediction_xgb2_test")
.withColumnRenamed("probability", "probability_xgb2_test")
)
predictions_df_xgb2_test_columns_renamed.select(
"rawPrediction_xgb2_test",
"probability_xgb2_test",
"prediction_xgb2_test",
"dep_delay_int_xgb2_test",
).show(100)
# ok in Databricks but timed out in Kaggle
# tp_xgb2_test = float(predictions_df_xgb2_test_columns_renamed.filter("prediction_xgb2_test == 1.0 AND dep_delay_int_xgb2_test == 1").count())
# fp_xgb2_test = float(predictions_df_xgb2_test_columns_renamed.filter("prediction_xgb2_test == 1.0 AND dep_delay_int_xgb2_test == 0").count())
# tn_xgb2_test = float(predictions_df_xgb2_test_columns_renamed.filter("prediction_xgb2_test == 0.0 AND dep_delay_int_xgb2_test == 0").count())
# fn_xgb2_test = float(predictions_df_xgb2_test_columns_renamed.filter("prediction_xgb2_test == 0.0 AND dep_delay_int_xgb2_test == 1").count())
# pr_xgb2_test = tp_xgb2_test / (tp_xgb2_test + fp_xgb2_test)
# re_xgb2_test = tp_xgb2_test / (tp_xgb2_test + fn_xgb2_test)
# metrics = spark.createDataFrame([
# ("TP", tp_xgb2_test),
# ("FP", fp_xgb2_test),
# ("TN", tn_xgb2_test),
# ("FN", fn_xgb2_test),
# ("Precision", pr_xgb2_test),
# ("Recall", re_xgb2_test),
# ("myAccuracy", (tp_xgb2_test+tn_xgb2_test)/(tp_xgb2_test+fp_xgb2_test+tn_xgb2_test+fn_xgb2_test)),
# ("F1", 2*pr_xgb2_test*re_xgb2_test/(re_xgb2_test+pr_xgb2_test))],["metric", "value"])
# metrics.show()
evaluator_xgbtest_mc_acc = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb2_test",
predictionCol="prediction_xgb2_test",
metricName="accuracy",
)
xgbtest_accuracy = evaluator_xgbtest_mc_acc.evaluate(
predictions_df_xgb2_test_columns_renamed
)
print(f"XGBTest Accuracy: {xgbtest_accuracy}")
evaluator_xgbtest_mc_precision = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb2_test",
predictionCol="prediction_xgb2_test",
metricName="precisionByLabel",
)
xgbtest_precision = evaluator_xgbtest_mc_precision.evaluate(
predictions_df_xgb2_test_columns_renamed
)
print(f"XGBTest Precision: {xgbtest_precision}")
evaluator_xgbtest_mc_recall = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb2_test",
predictionCol="prediction_xgb2_test",
metricName="recallByLabel",
)
xgbtest_recall = evaluator_xgbtest_mc_recall.evaluate(
predictions_df_xgb2_test_columns_renamed
)
print(f"XGBTest Recall: {xgbtest_recall}")
evaluator_xgbtest_mc_f1 = MulticlassClassificationEvaluator(
labelCol="dep_delay_int_xgb2_test",
predictionCol="prediction_xgb2_test",
metricName="f1",
)
xgbtest_f1 = evaluator_xgbtest_mc_f1.evaluate(predictions_df_xgb2_test_columns_renamed)
print(f"XGBTest F1: {xgbtest_f1}")
# area under ROC
evaluator_xgbtest_bc = BinaryClassificationEvaluator(
labelCol="dep_delay_int_xgb2_test",
rawPredictionCol="prediction_xgb2_test",
metricName="areaUnderROC",
)
xgbtest_areaUnderROC = evaluator_xgbtest_bc.evaluate(
predictions_df_xgb2_test_columns_renamed
)
print(f"XGBTest AreaUnderROC: {xgbtest_areaUnderROC}")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/097/129097508.ipynb
| null | null |
[{"Id": 129097508, "ScriptId": 38372163, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8831419, "CreationDate": "05/11/2023 02:03:51", "VersionNumber": 2.0, "Title": "FlightDelay-Analysis", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 757.0, "LinesInsertedFromPrevious": 63.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 694.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 14,877 | 0 | 14,877 | 14,877 |
||
129097338
|
# import libraries
from random import randint, seed
import pandas as pd
import numpy as np
import plotly.express as px
seed(10)
my_data = pd.read_csv("/kaggle/input/titanic/Titanic.tsv", sep="\t")
my_data.info()
my_data.head(11)
# Outliers
# Outliers are rare values that differ markedly from the other observations. They can be informative for an analysis, but they are often excluded from the dataset because they can distort the results.
print(my_data.describe())
my_data["Fare"] = pd.to_numeric(my_data["Fare"], errors="coerce")
# find out the outliers in the Fare variable
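# Rule of thumb: values above Q3 + 1.5*IQR (or below Q1 - 1.5*IQR) are commonly flagged as
# outliers; only the upper bound is checked here since Fare cannot be negative.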
q1 = my_data["Fare"].quantile(0.25)
q3 = my_data["Fare"].quantile(0.75)
iqr = q3 - q1
upper_bound = q3 + 1.5 * iqr
outliers = my_data[my_data["Fare"] > upper_bound]
print(outliers)
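# A minimal sketch of one way to exclude the flagged Fare outliers, kept in a separate
# variable so the analysis below is unchanged (rows with a missing Fare are also dropped
# by this comparison):
my_data_no_fare_outliers = my_data[my_data["Fare"] <= upper_bound]
print(my_data_no_fare_outliers.shape)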
# Handling duplicates
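# Note: keep=False drops every occurrence of a duplicated row; keep="first" would retain one copy.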
my_data.drop_duplicates(keep=False, inplace=True)
my_data.shape
# Handling missing data, NaNs, Blanks (missing values)
my_data.isna().sum()
my_data = my_data.dropna() # delete rows with missing data
my_data.isna().sum()
my_data.shape
# Wrong/improper values
my_data["Age"].value_counts()
# Convert incorrectly formatted Age values to float (string values containing a '.' have the dot removed before conversion)
my_data["Age"] = my_data["Age"].apply(
lambda x: float(x.replace(".", "")) if isinstance(x, str) and "." in x else x
)
my_data["Age"].value_counts()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/097/129097338.ipynb
| null | null |
[{"Id": 129097338, "ScriptId": 38204470, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14627695, "CreationDate": "05/11/2023 02:01:00", "VersionNumber": 10.0, "Title": "Data Cleaning and Preparation", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 57.0, "LinesInsertedFromPrevious": 15.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 42.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 434 | 0 | 434 | 434 |
||
129097494
|
<jupyter_start><jupyter_text>Credit Card Dataset for Clustering
This case requires developing a customer segmentation to define a marketing strategy. The
sample dataset summarizes the usage behavior of about 9000 active credit card holders during the last 6 months. The file is at a customer level with 18 behavioral variables.
Following is the Data Dictionary for Credit Card dataset :-
**CUST_ID** : Identification of Credit Card holder (Categorical)
**BALANCE** : Balance amount left in their account to make purchases
**BALANCE_FREQUENCY** : How frequently the Balance is updated, score between 0 and 1 (1 = frequently updated, 0 = not frequently updated)
**PURCHASES** : Amount of purchases made from account
**ONEOFF_PURCHASES** : Maximum purchase amount done in one-go
**INSTALLMENTS_PURCHASES** : Amount of purchase done in installment
**CASH_ADVANCE** : Cash in advance given by the user
**PURCHASES_FREQUENCY** : How frequently the Purchases are being made, score between 0 and 1 (1 = frequently purchased, 0 = not frequently purchased)
**ONEOFF_PURCHASES_FREQUENCY** : How frequently purchases are happening in one-go (1 = frequently purchased, 0 = not frequently purchased)
**PURCHASES_INSTALLMENTS_FREQUENCY** : How frequently purchases in installments are being done (1 = frequently done, 0 = not frequently done)
**CASH_ADVANCE_FREQUENCY** : How frequently the cash in advance is being paid
**CASH_ADVANCE_TRX** : Number of transactions made with "Cash in Advance"
**PURCHASES_TRX** : Number of purchase transactions made
**CREDIT_LIMIT** : Limit of Credit Card for user
**PAYMENTS** : Amount of Payment done by user
**MINIMUM_PAYMENTS** : Minimum amount of payments made by user
**PRC_FULL_PAYMENT** : Percent of full payment paid by user
**TENURE** : Tenure of credit card service for user
Kaggle dataset identifier: ccdata
<jupyter_script># Grupo de trabajo: Briannys Ahiram Páez Monserrate, Daniel Esteban Hurtado Dimas, Ramiro Esteban Bravo Higuera y Juan Camilo Peña Perdomo.
# El siguiente trabajo evidencia el taller propuesto en la matería big data, este trabajo es realizado en lenguaje R.
# # 1. Paquetes y librerías.
install.packages("FactoClass")
install.packages("dplyr")
install.packages("FactoMineR")
install.packages("factoextra")
install.packages("fpc")
rm(list=ls())
library(FactoClass)
library(readxl)
library(readr)
library(dplyr)
library(FactoMineR)
library(Factoshiny)
library(fpc)
library(tidyverse)
library(cluster)
library(factoextra)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/097/129097494.ipynb
|
ccdata
|
arjunbhasin2013
|
[{"Id": 129097494, "ScriptId": 38377830, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13998711, "CreationDate": "05/11/2023 02:03:34", "VersionNumber": 1.0, "Title": "notebook4f2fe97123", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 26.0, "LinesInsertedFromPrevious": 26.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184851850, "KernelVersionId": 129097494, "SourceDatasetVersionId": 19663}]
|
[{"Id": 19663, "DatasetId": 14701, "DatasourceVersionId": 19663, "CreatorUserId": 621533, "LicenseName": "CC0: Public Domain", "CreationDate": "03/02/2018 08:35:03", "VersionNumber": 1.0, "Title": "Credit Card Dataset for Clustering", "Slug": "ccdata", "Subtitle": NaN, "Description": "This case requires to develop a customer segmentation to define marketing strategy. The\nsample Dataset summarizes the usage behavior of about 9000 active credit card holders during the last 6 months. The file is at a customer level with 18 behavioral variables.\n\nFollowing is the Data Dictionary for Credit Card dataset :-\n\n**CUST_ID** : Identification of Credit Card holder (Categorical)\n**BALANCE** : Balance amount left in their account to make purchases (\n**BALANCE_FREQUENCY** : How frequently the Balance is updated, score between 0 and 1 (1 = frequently updated, 0 = not frequently updated)\n**PURCHASES** : Amount of purchases made from account\n**ONEOFF_PURCHASES** : Maximum purchase amount done in one-go\n**INSTALLMENTS_PURCHASES** : Amount of purchase done in installment\n**CASH_ADVANCE** : Cash in advance given by the user\n**PURCHASES_FREQUENCY** : How frequently the Purchases are being made, score between 0 and 1 (1 = frequently purchased, 0 = not frequently purchased)\n**ONEOFFPURCHASESFREQUENCY** : How frequently Purchases are happening in one-go (1 = frequently purchased, 0 = not frequently purchased)\n**PURCHASESINSTALLMENTSFREQUENCY** : How frequently purchases in installments are being done (1 = frequently done, 0 = not frequently done)\n**CASHADVANCEFREQUENCY** : How frequently the cash in advance being paid\n**CASHADVANCETRX** : Number of Transactions made with \"Cash in Advanced\"\n**PURCHASES_TRX** : Numbe of purchase transactions made\n**CREDIT_LIMIT** : Limit of Credit Card for user\n**PAYMENTS** : Amount of Payment done by user\n**MINIMUM_PAYMENTS** : Minimum amount of payments made by user\n**PRCFULLPAYMENT** : Percent of full payment paid by user\n**TENURE** : Tenure of credit card service for user", "VersionNotes": "Initial release", "TotalCompressedBytes": 902879.0, "TotalUncompressedBytes": 902879.0}]
|
[{"Id": 14701, "CreatorUserId": 621533, "OwnerUserId": 621533.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 19663.0, "CurrentDatasourceVersionId": 19663.0, "ForumId": 22292, "Type": 2, "CreationDate": "03/02/2018 08:35:03", "LastActivityDate": "03/02/2018", "TotalViews": 341525, "TotalDownloads": 47528, "TotalVotes": 471, "TotalKernels": 266}]
|
[{"Id": 621533, "UserName": "arjunbhasin2013", "DisplayName": "Arjun Bhasin", "RegisterDate": "05/23/2016", "PerformanceTier": 1}]
|
# Working group: Briannys Ahiram Páez Monserrate, Daniel Esteban Hurtado Dimas, Ramiro Esteban Bravo Higuera and Juan Camilo Peña Perdomo.
# The following work presents the assignment proposed in the Big Data course; it is carried out in the R language.
# # 1. Packages and libraries.
install.packages("FactoClass")
install.packages("dplyr")
install.packages("FactoMineR")
install.packages("factoextra")
install.packages("fpc")
rm(list=ls())
library(FactoClass)
library(readxl)
library(readr)
library(dplyr)
library(FactoMineR)
library(Factoshiny)
library(fpc)
library(tidyverse)
library(cluster)
library(factoextra)
library(readr)
| false | 0 | 222 | 0 | 694 | 222 |
||
129103334
|
<jupyter_start><jupyter_text>Amazon Sales Dataset
This dataset contains the ratings and reviews of 1K+ Amazon products, as listed on the official Amazon website.
**Features**
- product_id - Product ID
- product_name - Name of the Product
- category - Category of the Product
- discounted_price - Discounted Price of the Product
- actual_price - Actual Price of the Product
- discount_percentage - Percentage of Discount for the Product
- rating - Rating of the Product
- rating_count - Number of people who voted for the Amazon rating
- about_product - Description about the Product
- user_id - ID of the user who wrote review for the Product
- user_name - Name of the user who wrote review for the Product
- review_id - ID of the user review
- review_title - Short review
- review_content - Long review
- img_link - Image Link of the Product
- product_link - Official Website Link of the Product
**Inspiration**
Amazon is an American tech multinational whose business interests include e-commerce, where they buy and store the inventory and take care of everything from shipping and pricing to customer service and returns. I've created this dataset so that people can play with it and do a number of things, as mentioned below
- Dataset Walkthrough
- Understanding Dataset Hierarchy
- Data Preprocessing
- Exploratory Data Analysis
- Data Visualization
- Making Recommendation System
This is a list of some of the things you can do with this dataset. It is definitely not limited to what is mentioned above; a lot more can be done.
Kaggle dataset identifier: amazon-sales-dataset
<jupyter_code>import pandas as pd
df = pd.read_csv('amazon-sales-dataset/amazon.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 1465 entries, 0 to 1464
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 product_id 1465 non-null object
1 product_name 1465 non-null object
2 category 1465 non-null object
3 discounted_price 1465 non-null object
4 actual_price 1465 non-null object
5 discount_percentage 1465 non-null object
6 rating 1465 non-null object
7 rating_count 1463 non-null object
8 about_product 1465 non-null object
9 user_id 1465 non-null object
10 user_name 1465 non-null object
11 review_id 1465 non-null object
12 review_title 1465 non-null object
13 review_content 1465 non-null object
14 img_link 1465 non-null object
15 product_link 1465 non-null object
dtypes: object(16)
memory usage: 183.2+ KB
<jupyter_text>Examples:
{
"product_id": "B07JW9H4J1",
"product_name": "Wayona Nylon Braided USB to Lightning Fast Charging and Data Sync Cable Compatible for iPhone 13, 12,11, X, 8, 7, 6, 5, iPad Air, Pro, Mini (3 FT Pack of 1, Grey)",
"category": "Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables",
"discounted_price": "\u20b9399",
"actual_price": "\u20b91,099",
"discount_percentage": "64%",
"rating": 4.2,
"rating_count": "24,269",
"about_product": "High Compatibility : Compatible With iPhone 12, 11, X/XsMax/Xr ,iPhone 8/8 Plus,iPhone 7/7 Plus,iPhone 6s/6s Plus,iPhone 6/6 Plus,iPhone 5/5s/5c/se,iPad Pro,iPad Air 1/2,iPad mini 1/2/3,iPod nano7,iPod touch and more apple devices.|Fast Charge&Data Sync : It can charge and sync...(truncated)",
"user_id": "AG3D6O4STAQKAY2UVGEUV46KN35Q,AHMY5CWJMMK5BJRBBSNLYT3ONILA,AHCTC6ULH4XB6YHDY6PCH2R772LQ,AGYHHIERNXKA6P5T7CZLXKVPT7IQ,AG4OGOFWXJZTQ2HKYIOCOY3KXF2Q,AENGU523SXMOS7JPDTW52PNNVWGQ,AEQJHCVTNINBS4FKTBGQRQTGTE5Q,AFC3FFC5PKFF5PMA52S3VCHOZ5FQ",
"user_name": "Manav,Adarsh gupta,Sundeep,S.Sayeed Ahmed,jaspreet singh,Khaja moin,Anand,S.ARUMUGAM",
"review_id": "R3HXWT0LRP0NMF,R2AJM3LFTLZHFO,R6AQJGUP6P86,R1KD19VHEDV0OR,R3C02RMYQMK6FC,R39GQRVBUZBWGY,R2K9EDOE15QIRJ,R3OI7YT648TL8I",
"review_title": "Satisfied,Charging is really fast,Value for money,Product review,Good quality,Good product,Good Product,As of now seems good",
"review_content": "Looks durable Charging is fine tooNo complains,Charging is really fast, good product.,Till now satisfied with the quality.,This is a good product . The charging speed is slower than the original iPhone cable,Good quality, would recommend,https://m.media-amazon.com/images/W/WEB...(truncated)",
"img_link": "https://m.media-amazon.com/images/W/WEBP_402378-T1/images/I/51UsScvHQNL._SX300_SY300_QL70_FMwebp_.jpg",
"product_link": "https://www.amazon.in/Wayona-Braided-WN3LG1-Syncing-Charging/dp/B07JW9H4J1/ref=sr_1_1?qid=1672909124&s=electronics&sr=1-1"
}
{
"product_id": "B098NS6PVG",
"product_name": "Ambrane Unbreakable 60W / 3A Fast Charging 1.5m Braided Type C Cable for Smartphones, Tablets, Laptops & other Type C devices, PD Technology, 480Mbps Data Sync, Quick Charge 3.0 (RCT15A, Black)",
"category": "Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables",
"discounted_price": "\u20b9199",
"actual_price": "\u20b9349",
"discount_percentage": "43%",
"rating": 4.0,
"rating_count": "43,994",
"about_product": "Compatible with all Type C enabled devices, be it an android smartphone (Mi, Samsung, Oppo, Vivo, Realme, OnePlus, etc), tablet, laptop (Macbook, Chromebook, etc)|Supports Quick Charging (2.0/3.0)|Unbreakable \u2013 Made of special braided outer with rugged interior bindings, i...(truncated)",
"user_id": "AECPFYFQVRUWC3KGNLJIOREFP5LQ,AGYYVPDD7YG7FYNBXNGXZJT525AQ,AHONIZU3ICIEHQIGQ6R2VFRSBXOQ,AFPHD2CRPDZMWMBL7WXRSVYWS5JA,AEZ346GX3HJ4O4XNRPHCNHXQURMQ,AEPSWFPNECKO34PUC7I56ITGXR6Q,AHWVEHR5DYLVFTO2KF3IZATFQSWQ,AH4QT33M55677I7ISQOAKEQWACYQ",
"user_name": "ArdKn,Nirbhay kumar,Sagar Viswanathan,Asp,Placeholder,BharanI,sonia,Niam",
"review_id": "RGIQEG07R9HS2,R1SMWZQ86XIN8U,R2J3Y1WL29GWDE,RYGGS0M09S3KY,R17KQRUTAN5DKS,R3AAQGS6HP2QUK,R1HDNOG6TO2CCA,R3PHKXYA5AFEOU",
"review_title": "A Good Braided Cable for Your Type C Device,Good quality product from ambrane,Super cable,As,Good quality,Good product,its good,Good quality for the price but one issue with my unit",
"review_content": "I ordered this cable to connect my phone to Android Auto of car. The cable is really strong and the connection ports are really well made. I already has a Micro USB cable from Ambrane and it's still in good shape. I connected my phone to the car using the cable and it got conn...(truncated)",
"img_link": "https://m.media-amazon.com/images/W/WEBP_402378-T2/images/I/31zOsqQOAOL._SY445_SX342_QL70_FMwebp_.jpg",
"product_link": "https://www.amazon.in/Ambrane-Unbreakable-Charging-Braided-Cable/dp/B098NS6PVG/ref=sr_1_2?qid=1672909124&s=electronics&sr=1-2"
}
{
"product_id": "B096MSW6CT",
"product_name": "Sounce Fast Phone Charging Cable & Data Sync USB Cable Compatible for iPhone 13, 12,11, X, 8, 7, 6, 5, iPad Air, Pro, Mini & iOS Devices",
"category": "Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables",
"discounted_price": "\u20b9199",
"actual_price": "\u20b91,899",
"discount_percentage": "90%",
"rating": 3.9,
"rating_count": "7,928",
"about_product": "\u3010 Fast Charger& Data Sync\u3011-With built-in safety proctections and four-core copper wires promote maximum signal quality and strength and enhance charging & data transfer speed with up to 480 mb/s transferring speed.|\u3010 Compatibility\u3011-Compatible with iPhone 13,...(truncated)",
"user_id": "AGU3BBQ2V2DDAMOAKGFAWDDQ6QHA,AESFLDV2PT363T2AQLWQOWZ4N3OA,AHTPQRIMGUD4BYR5YIHBH3CCGEFQ,AEUVWXYP5LT7PZLLZENEO2NODPBQ,AHC7MPW55DOO6WNCOQVA2VHOD26A,AFDI6FRPFBTNBG7BAEB7JDJSMKDQ,AFQKCEEEKXCOHTDG4WUN3XPPHJQQ,AHKUUFNMBZIDLSSPA4FEHIO2EC7Q",
"user_name": "Kunal,Himanshu,viswanath,sai niharka,saqib malik,Aashiq,Ramu Challa,Sanjay gupta",
"review_id": "R3J3EQQ9TZI5ZJ,R3E7WBGK7ID0KV,RWU79XKQ6I1QF,R25X4TBMPY91LX,R27OK7G99VK0TR,R207CYDCHJJTCJ,R3PCU8XMU173BT,R1IMONDOWRNU5V",
"review_title": "Good speed for earlier versions,Good Product,Working good,Good for the price,Good,Worth for money,Working nice,it's a really nice product",
"review_content": "Not quite durable and sturdy,https://m.media-amazon.com/images/W/WEBP_402378-T1/images/I/71rIggrbUCL._SY88.jpg,Working good,https://m.media-amazon.com/images/W/WEBP_402378-T1/images/I/61bKp9YO6wL._SY88.jpg,Product,Very nice product,Working well,It's a really nice product",
"img_link": "https://m.media-amazon.com/images/W/WEBP_402378-T1/images/I/31IvNJZnmdL._SY445_SX342_QL70_FMwebp_.jpg",
"product_link": "https://www.amazon.in/Sounce-iPhone-Charging-Compatible-Devices/dp/B096MSW6CT/ref=sr_1_3?qid=1672909124&s=electronics&sr=1-3"
}
{
"product_id": "B08HDJ86NZ",
"product_name": "boAt Deuce USB 300 2 in 1 Type-C & Micro USB Stress Resistant, Tangle-Free, Sturdy Cable with 3A Fast Charging & 480mbps Data Transmission, 10000+ Bends Lifespan and Extended 1.5m Length(Martian Red)",
"category": "Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables",
"discounted_price": "\u20b9329",
"actual_price": "\u20b9699",
"discount_percentage": "53%",
"rating": 4.2,
"rating_count": "94,363",
"about_product": "The boAt Deuce USB 300 2 in 1 cable is compatible with smartphones, tablets, PC peripherals, Bluetooth speakers, power banks and all other devices with Type-C as well as Micro USB port|It ensures 3A fast charging and data transmissions with rapid sync at 480 mbps|The premium Ny...(truncated)",
"user_id": "AEWAZDZZJLQUYVOVGBEUKSLXHQ5A,AG5HTSFRRE6NL3M5SGCUQBP7YSCA,AH725ST5NW2Y4JZPKUNTIJCUK2BA,AHV3TXIFCJPMS4D5JATCEUR266MQ,AGWIGDEMFIIUAOXYY2QATNBSUGHA,AFSTSLQUV4EVEXWKBOLEFHL2H5YQ,AGAKDNBHY2FKX7I4ACRGILU7QL7A,AFNWJUWJRHCC6HN52KMG5AKZY37Q",
"user_name": "Omkar dhale,JD,HEMALATHA,Ajwadh a.,amar singh chouhan,Ravi Siddan,Himanshu Goel,Udaykumar",
"review_id": "R3EEUZKKK9J36I,R3HJVYCLYOY554,REDECAZ7AMPQC,R1CLH2ULIVG5U3,R2DMKIBGFKBD6R,RC89B5IAJUTR5,R3B3DDON5FH8DS,R13WAEJDI5RS36",
"review_title": "Good product,Good one,Nice,Really nice product,Very first time change,Good,Fine product but could be better,Very nice it's charging like jet",
"review_content": "Good product,long wire,Charges good,Nice,I bought this cable for Rs.339 worthy product for this price, i tested it in various charger adapters 33w and 18w it supports fast charging as well.,Good,Ok,I had got this at good price on sale on Amazon and product is useful with warra...(truncated)",
"img_link": "https://m.media-amazon.com/images/I/41V5FtEWPkL._SX300_SY300_QL70_FMwebp_.jpg",
"product_link": "https://www.amazon.in/Deuce-300-Resistant-Tangle-Free-Transmission/dp/B08HDJ86NZ/ref=sr_1_4?qid=1672909124&s=electronics&sr=1-4"
}
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
import matplotlib.pyplot as plt
import seaborn as sns
dt = pd.read_csv("/kaggle/input/amazon-sales-dataset/amazon.csv")
dt.head(4)
# tail shows last 5 rows
dt.tail()
dt.info()
dt.describe()
# Missing (NaN) values per column
dt.isna().sum()
# Drop the rows with missing values (the only NaNs are in rating_count)
dt.dropna(axis=0, how="any", inplace=True)
# Confirm that no missing values remain
dt.isna().sum()
# Cleaning: strip currency symbols, thousands separators and '%' signs, then cast the columns to numeric
dt["category"] = dt.category.apply(lambda x: x.split("|")[0])
dt["discounted_price"] = dt.discounted_price.apply(
lambda x: x.replace("₹", "").replace(",", "")
).astype(float)
dt["actual_price"] = dt.actual_price.apply(
lambda x: x.replace("₹", "").replace(",", "")
).astype(float)
dt["rating_count"] = (
dt.rating_count.astype(str).apply(lambda x: x.replace(",", "")).astype(int)
)
dt["discount_percentage"] = dt.discount_percentage.apply(
lambda x: x.replace("%", "")
).astype(int)
dt.head(5)
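# An equivalent, slightly more defensive cleaning sketch (added for illustration, not
# part of the original notebook). Assumption: we start again from the raw CSV, and
# `raw` is a hypothetical helper frame used only here. pd.to_numeric(..., errors="coerce")
# maps any malformed string to NaN instead of raising.
raw = pd.read_csv("/kaggle/input/amazon-sales-dataset/amazon.csv")
for col in ["discounted_price", "actual_price"]:
    raw[col] = pd.to_numeric(
        raw[col].str.replace("₹", "", regex=False).str.replace(",", "", regex=False),
        errors="coerce",
    )
raw["discount_percentage"] = pd.to_numeric(
    raw["discount_percentage"].str.rstrip("%"), errors="coerce"
)
raw["rating_count"] = pd.to_numeric(
    raw["rating_count"].str.replace(",", "", regex=False), errors="coerce"
)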
# Correlation matrix
corrmat = dt.corr()
f, ax = plt.subplots(figsize=(4, 2))
sns.heatmap(corrmat, vmax=1, square=True)
dt.corr()
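# Note (added remark, not in the original run): on pandas >= 2.0, DataFrame.corr()
# raises on non-numeric (object) columns, so the equivalent call there would be
# dt.corr(numeric_only=True); older pandas versions silently drop the text columns.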
# Histogram
dt.hist(figsize=(8, 8), bins=10)
plt.tight_layout()
# List the names of the columns
cols = dt.columns.tolist()
print(cols)
print(len(cols))
# Unique values in category
dt["category"].unique()
# Unique values in discount_percentage
dt["discount_percentage"].unique()
# Unique values in product_name
dt["product_name"].unique()
# Analyzing the share of products by category
plt.figure(figsize=(10, 5))
sns.histplot(data=dt, x="category", stat="percent")
plt.xticks(rotation=90)
# The analysis will continue with the categories: Computers&Accessories, Home&Kitchen, Electronics and OfficeProducts
# sort the data by 'rating' in descending order
dt.sort_values(by="rating", ascending=False, inplace=True)
dt["rating"].value_counts()
dt.loc[dt["rating"] == "|"]
dt.drop(index=1279, inplace=True)
dt["rating"].value_counts().plot.bar()
# scatterplot of rating counts by category
sns.scatterplot(x="category", y="rating_count", data=dt, hue="rating_count")
plt.xticks(rotation=90)
plt.tight_layout()
# New data set
dt2 = dt.drop(
columns=[
"product_id",
"about_product",
"user_id",
"user_name",
"review_id",
"review_title",
"review_content",
"img_link",
]
)
# Sort the data by 'rating_count'
dt2.sort_values(by="rating_count", ascending=False, inplace=True)
# Drop duplicate products, keeping the entry with the highest 'rating_count' (the data is sorted descending)
dt2.drop_duplicates(
subset="product_name", keep="first", inplace=True, ignore_index=True
)
dt2.shape
# We can visualize the discount percentage distribution by category:
new_catg = ("Electronics", "Home&Kitchen", "Computers&Accessories", "OfficeProducts")
ddf = dt2[dt2["category"].isin(new_catg)]
sns.displot(data=ddf, x="discount_percentage", col="category", kind="hist", kde=True)
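# If the mean itself is wanted rather than the full distribution (a small added
# sketch), a groupby-aggregate gives it directly:
ddf.groupby("category")["discount_percentage"].mean()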
plt.figure(figsize=(15, 5))
plt.xticks(rotation=90)
sns.stripplot(data=dt2, x="category", y="rating_count", jitter=0.3)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/103/129103334.ipynb
|
amazon-sales-dataset
|
karkavelrajaj
|
[{"Id": 129103334, "ScriptId": 35240882, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12683756, "CreationDate": "05/11/2023 03:23:13", "VersionNumber": 4.0, "Title": "Visual Analysis", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 112.0, "LinesInsertedFromPrevious": 19.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 93.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184862552, "KernelVersionId": 129103334, "SourceDatasetVersionId": 4862520}]
|
[{"Id": 4862520, "DatasetId": 2818963, "DatasourceVersionId": 4929374, "CreatorUserId": 9355447, "LicenseName": "CC BY-NC-SA 4.0", "CreationDate": "01/17/2023 06:21:15", "VersionNumber": 1.0, "Title": "Amazon Sales Dataset", "Slug": "amazon-sales-dataset", "Subtitle": "This dataset is having the data of 1K+ Amazon Product's Ratings and Reviews", "Description": "This dataset is having the data of 1K+ Amazon Product's Ratings and Reviews as per their details listed on the official website of Amazon\n\n**Features**\n\n- product_id - Product ID\n- product_name - Name of the Product\n- category - Category of the Product\n- discounted_price - Discounted Price of the Product\n- actual_price - Actual Price of the Product\n- discount_percentage - Percentage of Discount for the Product\n- rating - Rating of the Product\n- rating_count - Number of people who voted for the Amazon rating\n- about_product - Description about the Product\n- user_id - ID of the user who wrote review for the Product\n- user_name - Name of the user who wrote review for the Product\n- review_id - ID of the user review\n- review_title - Short review\n- review_content - Long review\n- img_link - Image Link of the Product\n- product_link - Official Website Link of the Product\n\n**Inspiration**\n\nAmazon is an American Tech Multi-National Company whose business interests include E-commerce, where they buy and store the inventory, and take care of everything from shipping and pricing to customer service and returns. I've created this dataset so that people can play with this dataset and do a lot of things as mentioned below\n\n- Dataset Walkthrough\n- Understanding Dataset Hierarchy\n- Data Preprocessing\n- Exploratory Data Analysis\n- Data Visualization\n- Making Recommendation System\nThis is a list of some of that things that you can do on this dataset. It's not definitely limited to the one that is mentioned there but a lot more other things can also be done.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2818963, "CreatorUserId": 9355447, "OwnerUserId": 9355447.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 4862520.0, "CurrentDatasourceVersionId": 4929374.0, "ForumId": 2853848, "Type": 2, "CreationDate": "01/17/2023 06:21:15", "LastActivityDate": "01/17/2023", "TotalViews": 157282, "TotalDownloads": 32675, "TotalVotes": 298, "TotalKernels": 30}]
|
[{"Id": 9355447, "UserName": "karkavelrajaj", "DisplayName": "KARKAVELRAJA J", "RegisterDate": "01/09/2022", "PerformanceTier": 0}]
|
[{"amazon-sales-dataset/amazon.csv": {"column_names": "[\"product_id\", \"product_name\", \"category\", \"discounted_price\", \"actual_price\", \"discount_percentage\", \"rating\", \"rating_count\", \"about_product\", \"user_id\", \"user_name\", \"review_id\", \"review_title\", \"review_content\", \"img_link\", \"product_link\"]", "column_data_types": "{\"product_id\": \"object\", \"product_name\": \"object\", \"category\": \"object\", \"discounted_price\": \"object\", \"actual_price\": \"object\", \"discount_percentage\": \"object\", \"rating\": \"object\", \"rating_count\": \"object\", \"about_product\": \"object\", \"user_id\": \"object\", \"user_name\": \"object\", \"review_id\": \"object\", \"review_title\": \"object\", \"review_content\": \"object\", \"img_link\": \"object\", \"product_link\": \"object\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1465 entries, 0 to 1464\nData columns (total 16 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 product_id 1465 non-null object\n 1 product_name 1465 non-null object\n 2 category 1465 non-null object\n 3 discounted_price 1465 non-null object\n 4 actual_price 1465 non-null object\n 5 discount_percentage 1465 non-null object\n 6 rating 1465 non-null object\n 7 rating_count 1463 non-null object\n 8 about_product 1465 non-null object\n 9 user_id 1465 non-null object\n 10 user_name 1465 non-null object\n 11 review_id 1465 non-null object\n 12 review_title 1465 non-null object\n 13 review_content 1465 non-null object\n 14 img_link 1465 non-null object\n 15 product_link 1465 non-null object\ndtypes: object(16)\nmemory usage: 183.2+ KB\n", "summary": "{\"product_id\": {\"count\": 1465, \"unique\": 1351, \"top\": \"B07JW9H4J1\", \"freq\": 3}, \"product_name\": {\"count\": 1465, \"unique\": 1337, \"top\": \"Fire-Boltt Ninja Call Pro Plus 1.83\\\" Smart Watch with Bluetooth Calling, AI Voice Assistance, 100 Sports Modes IP67 Rating, 240*280 Pixel High Resolution\", \"freq\": 5}, \"category\": {\"count\": 1465, \"unique\": 211, \"top\": \"Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables\", \"freq\": 233}, \"discounted_price\": {\"count\": 1465, \"unique\": 550, \"top\": \"\\u20b9199\", \"freq\": 53}, \"actual_price\": {\"count\": 1465, \"unique\": 449, \"top\": \"\\u20b9999\", \"freq\": 120}, \"discount_percentage\": {\"count\": 1465, \"unique\": 92, \"top\": \"50%\", \"freq\": 56}, \"rating\": {\"count\": 1465, \"unique\": 28, \"top\": \"4.1\", \"freq\": 244}, \"rating_count\": {\"count\": 1463, \"unique\": 1143, \"top\": \"9,378\", \"freq\": 9}, \"about_product\": {\"count\": 1465, \"unique\": 1293, \"top\": \"[CHARGE & SYNC FUNCTION]- This cable comes with charging & Data sync function|[HIGH QUALITY MATERIAL]- TPE + Nylon Material to make sure that the life of the cable is enhanced significantly|[LONG CORD]- The Cable is extra thick 1.2 meter long, optimized for an easy use for your comfort at home or office|[MORE DURABLE]-This cable is unique interms of design and multi-use and is positioned to provide the best comfort and performance while using|[UNIVERSAL COMPATIBILITY]- Compatible with all devices like iPhone XS, X, XR, 8, 7, 6S, 6, 5S, iPad Pro, iPad mini and iPad Air\", \"freq\": 6}, \"user_id\": {\"count\": 1465, \"unique\": 1194, \"top\": 
\"AHIKJUDTVJ4T6DV6IUGFYZ5LXMPA,AE55KTFVNXYFD5FPYWP2OUPEYNPQ,AEBWA5I4QFCA3P3OBEPMELBGN4GQ,AHMGAC6QM62UXNEOCZIHLHSXPP2Q,AFHROSCGIXUPV3FYQ7H5QOD46Q7Q,AEAMIR3CMSA32IDEINSJKHRNANTA,AF355FTXYAKFH5NYPRTE7SL3WO3Q,AG5DWPD54QGSLWJ6QUFERLPNAX4Q\", \"freq\": 10}, \"user_name\": {\"count\": 1465, \"unique\": 1194, \"top\": \"$@|\\\\|TO$|-|,Sethu madhav,Akash Thakur,Burger Planet,Justice \\u2696\\ufe0f,indrajyoti d.,Aditya Kumar,E.C.GEORGE\", \"freq\": 10}, \"review_id\": {\"count\": 1465, \"unique\": 1194, \"top\": \"R3F4T5TRYPTMIG,R3DQIEC603E7AY,R1O4Z15FD40PV5,RDVX50PD4CTFE,R3H6WKG0TA5CGU,R3Q3L1KP5QWPV3,RU0LU2PAIIME,R20FTANBPFA653\", \"freq\": 10}, \"review_title\": {\"count\": 1465, \"unique\": 1194, \"top\": \"Worked on iPhone 7 and didn\\u2019t work on XR,Good one,Dull Physical Looks,Just Buy it,Go for it,About the product,Get charging cable at the price,Working well.\", \"freq\": 10}, \"review_content\": {\"count\": 1465, \"unique\": 1212, \"top\": \"I am not big on camera usage, personally. I was even mentally prepared for a bad camera, based on some reviews here. But I was pleasantly surprised that camera clicks good photos. They are not awesome, but they are decent photos that can even be shared.Now coming to my biggest grouse; heating issue. The phone started heating up while charging, but it was just a little and so I could have ignored it. But then it started heating up more and got me very concerned. I even ordered a replacement thinking I got a defective piece. But then, after further tests, I found that it is heating more when I download huge amounts of data, for example, when I restore data of my old phone, from back up. This is ok with me as, I don't perform huge data loads regularly, definitely not on phone. Then I tested by running tasks I usually perform such as checking office mails, attending office meeting on phone, watching a video from Amazon Prime, and so on. The phone did not heat up even a little. Personally, this is good for me.At this price range, this is a good phone. But if you are camera heavy user and expect to perform heavy downloads frequently, this phone may not for you. I am personally satisfied with this phone as it works for my type of usage. I will not go into plus points of this phone as they are covered by other reviews already. I am only attempting to clarify about how this phone can suit you (or not) in terms of camera and heating. I had many questions about these aspects before buying. Perhaps this review will help you make an informed decision to buy (or avoid). Cheers.,Display - BeautyCamera - decentPerformance - AmazingBattery - ok (in 5000mah u expect more tbh)Overall good phone...Also after 1day of use, i found some network connectivity issue in my jiosim, which I'm using right now in this phone, but I'll keep update this review after 1month of usage!,It's a decent mobile under this price but few things worried me , weight of the phone, too many procedure to change some settings, no screen casting. Apart from that it has good touch, a decent camera for day light , battery life is good.,I bought this smartphone for my mom. Samusung interface is very handful for easy use. Battery is superb, last whole day. Camera is mediocre but provide original colour pictures. All in all satisfied with this smartphone that i got in sale for 9499.,Unable to do video call within same service provider as in VOLTE within same service provider video call feature is available.,Product is fine. 
Nothing Fancy but for the budget it is a good phone.,BATTERY : more than enough for normal use Not sure in gamingCAMERA : good in this segment , can record videos in FHD 30fpsDISPLAY : since it's a LCD display the quality is a bit less , but goodV RAM : you can add upto 2gb of virtual ram but have to sacrifice your storage Space to use it OVERALL A GOOD BUDGET PHONE,Finger print is working speedy battery backup is good camera quality is also good\", \"freq\": 8}, \"img_link\": {\"count\": 1465, \"unique\": 1412, \"top\": \"https://m.media-amazon.com/images/I/413sCRKobNL._SX300_SY300_QL70_ML2_.jpg\", \"freq\": 3}, \"product_link\": {\"count\": 1465, \"unique\": 1465, \"top\": \"https://www.amazon.in/Wayona-Braided-WN3LG1-Syncing-Charging/dp/B07JW9H4J1/ref=sr_1_1?qid=1672909124&s=electronics&sr=1-1\", \"freq\": 1}}", "examples": "{\"product_id\":{\"0\":\"B07JW9H4J1\",\"1\":\"B098NS6PVG\",\"2\":\"B096MSW6CT\",\"3\":\"B08HDJ86NZ\"},\"product_name\":{\"0\":\"Wayona Nylon Braided USB to Lightning Fast Charging and Data Sync Cable Compatible for iPhone 13, 12,11, X, 8, 7, 6, 5, iPad Air, Pro, Mini (3 FT Pack of 1, Grey)\",\"1\":\"Ambrane Unbreakable 60W \\/ 3A Fast Charging 1.5m Braided Type C Cable for Smartphones, Tablets, Laptops & other Type C devices, PD Technology, 480Mbps Data Sync, Quick Charge 3.0 (RCT15A, Black)\",\"2\":\"Sounce Fast Phone Charging Cable & Data Sync USB Cable Compatible for iPhone 13, 12,11, X, 8, 7, 6, 5, iPad Air, Pro, Mini & iOS Devices\",\"3\":\"boAt Deuce USB 300 2 in 1 Type-C & Micro USB Stress Resistant, Tangle-Free, Sturdy Cable with 3A Fast Charging & 480mbps Data Transmission, 10000+ Bends Lifespan and Extended 1.5m Length(Martian Red)\"},\"category\":{\"0\":\"Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables\",\"1\":\"Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables\",\"2\":\"Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables\",\"3\":\"Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables\"},\"discounted_price\":{\"0\":\"\\u20b9399\",\"1\":\"\\u20b9199\",\"2\":\"\\u20b9199\",\"3\":\"\\u20b9329\"},\"actual_price\":{\"0\":\"\\u20b91,099\",\"1\":\"\\u20b9349\",\"2\":\"\\u20b91,899\",\"3\":\"\\u20b9699\"},\"discount_percentage\":{\"0\":\"64%\",\"1\":\"43%\",\"2\":\"90%\",\"3\":\"53%\"},\"rating\":{\"0\":\"4.2\",\"1\":\"4.0\",\"2\":\"3.9\",\"3\":\"4.2\"},\"rating_count\":{\"0\":\"24,269\",\"1\":\"43,994\",\"2\":\"7,928\",\"3\":\"94,363\"},\"about_product\":{\"0\":\"High Compatibility : Compatible With iPhone 12, 11, X\\/XsMax\\/Xr ,iPhone 8\\/8 Plus,iPhone 7\\/7 Plus,iPhone 6s\\/6s Plus,iPhone 6\\/6 Plus,iPhone 5\\/5s\\/5c\\/se,iPad Pro,iPad Air 1\\/2,iPad mini 1\\/2\\/3,iPod nano7,iPod touch and more apple devices.|Fast Charge&Data Sync : It can charge and sync simultaneously at a rapid speed, Compatible with any charging adaptor, multi-port charging station or power bank.|Durability : Durable nylon braided design with premium aluminum housing and toughened nylon fiber wound tightly around the cord lending it superior durability and adding a bit to its flexibility.|High Security Level : It is designed to fully protect your device from damaging excessive current.Copper core thick+Multilayer shielding, Anti-interference, Protective circuit equipment.|WARRANTY: 12 months warranty and friendly customer services, ensures the long-time enjoyment of your purchase. 
If you meet any question or problem, please don't hesitate to contact us.\",\"1\":\"Compatible with all Type C enabled devices, be it an android smartphone (Mi, Samsung, Oppo, Vivo, Realme, OnePlus, etc), tablet, laptop (Macbook, Chromebook, etc)|Supports Quick Charging (2.0\\/3.0)|Unbreakable \\u2013 Made of special braided outer with rugged interior bindings, it is ultra-durable cable that won\\u2019t be affected by daily rough usage|Ideal Length \\u2013 It has ideal length of 1.5 meters which is neither too short like your typical 1meter cable or too long like a 2meters cable|Supports maximum 3A fast charging and 480 Mbps data transfer speed|6 months manufacturer warranty from the date of purchase\",\"2\":\"\\u3010 Fast Charger& Data Sync\\u3011-With built-in safety proctections and four-core copper wires promote maximum signal quality and strength and enhance charging & data transfer speed with up to 480 mb\\/s transferring speed.|\\u3010 Compatibility\\u3011-Compatible with iPhone 13, 12,11, X, 8, 7, 6, 5, iPad Air, Pro, Mini & iOS devices.|\\u3010 Sturdy & Durable\\u3011-The jacket and enforced connector made of TPE and premium copper, are resistant to repeatedly bending and coiling.|\\u3010 Ultra High Quality\\u3011: According to the experimental results, the fishbone design can accept at least 20,000 bending and insertion tests for extra protection and durability. Upgraded 3D aluminum connector and exclusive laser welding technology, which to ensure the metal part won't break and also have a tighter connection which fits well even with a protective case on and will never loose connection.|\\u3010 Good After Sales Service\\u3011-Our friendly and reliable customer service will respond to you within 24 hours ! you can purchase with confidence,and every sale includes a 365-day worry-free Service to prove the importance we set on quality.\",\"3\":\"The boAt Deuce USB 300 2 in 1 cable is compatible with smartphones, tablets, PC peripherals, Bluetooth speakers, power banks and all other devices with Type-C as well as Micro USB port|It ensures 3A fast charging and data transmissions with rapid sync at 480 mbps|The premium Nylon braided skin makes it sturdy and invincible against external damage|Its Aluminium alloy shell housing makes it last longer with 10000+ Bends Lifespan with extended frame protection for strain relief|The resilient and flexible design offers a tangle free experience seamlessly|Deuce USB 300 cable offers a perfect 1.5 meters in length for smooth & hassle-free user experience|2 years warranty from the date of 
purchase\"},\"user_id\":{\"0\":\"AG3D6O4STAQKAY2UVGEUV46KN35Q,AHMY5CWJMMK5BJRBBSNLYT3ONILA,AHCTC6ULH4XB6YHDY6PCH2R772LQ,AGYHHIERNXKA6P5T7CZLXKVPT7IQ,AG4OGOFWXJZTQ2HKYIOCOY3KXF2Q,AENGU523SXMOS7JPDTW52PNNVWGQ,AEQJHCVTNINBS4FKTBGQRQTGTE5Q,AFC3FFC5PKFF5PMA52S3VCHOZ5FQ\",\"1\":\"AECPFYFQVRUWC3KGNLJIOREFP5LQ,AGYYVPDD7YG7FYNBXNGXZJT525AQ,AHONIZU3ICIEHQIGQ6R2VFRSBXOQ,AFPHD2CRPDZMWMBL7WXRSVYWS5JA,AEZ346GX3HJ4O4XNRPHCNHXQURMQ,AEPSWFPNECKO34PUC7I56ITGXR6Q,AHWVEHR5DYLVFTO2KF3IZATFQSWQ,AH4QT33M55677I7ISQOAKEQWACYQ\",\"2\":\"AGU3BBQ2V2DDAMOAKGFAWDDQ6QHA,AESFLDV2PT363T2AQLWQOWZ4N3OA,AHTPQRIMGUD4BYR5YIHBH3CCGEFQ,AEUVWXYP5LT7PZLLZENEO2NODPBQ,AHC7MPW55DOO6WNCOQVA2VHOD26A,AFDI6FRPFBTNBG7BAEB7JDJSMKDQ,AFQKCEEEKXCOHTDG4WUN3XPPHJQQ,AHKUUFNMBZIDLSSPA4FEHIO2EC7Q\",\"3\":\"AEWAZDZZJLQUYVOVGBEUKSLXHQ5A,AG5HTSFRRE6NL3M5SGCUQBP7YSCA,AH725ST5NW2Y4JZPKUNTIJCUK2BA,AHV3TXIFCJPMS4D5JATCEUR266MQ,AGWIGDEMFIIUAOXYY2QATNBSUGHA,AFSTSLQUV4EVEXWKBOLEFHL2H5YQ,AGAKDNBHY2FKX7I4ACRGILU7QL7A,AFNWJUWJRHCC6HN52KMG5AKZY37Q\"},\"user_name\":{\"0\":\"Manav,Adarsh gupta,Sundeep,S.Sayeed Ahmed,jaspreet singh,Khaja moin,Anand,S.ARUMUGAM\",\"1\":\"ArdKn,Nirbhay kumar,Sagar Viswanathan,Asp,Placeholder,BharanI,sonia,Niam\",\"2\":\"Kunal,Himanshu,viswanath,sai niharka,saqib malik,Aashiq,Ramu Challa,Sanjay gupta\",\"3\":\"Omkar dhale,JD,HEMALATHA,Ajwadh a.,amar singh chouhan,Ravi Siddan,Himanshu Goel,Udaykumar\"},\"review_id\":{\"0\":\"R3HXWT0LRP0NMF,R2AJM3LFTLZHFO,R6AQJGUP6P86,R1KD19VHEDV0OR,R3C02RMYQMK6FC,R39GQRVBUZBWGY,R2K9EDOE15QIRJ,R3OI7YT648TL8I\",\"1\":\"RGIQEG07R9HS2,R1SMWZQ86XIN8U,R2J3Y1WL29GWDE,RYGGS0M09S3KY,R17KQRUTAN5DKS,R3AAQGS6HP2QUK,R1HDNOG6TO2CCA,R3PHKXYA5AFEOU\",\"2\":\"R3J3EQQ9TZI5ZJ,R3E7WBGK7ID0KV,RWU79XKQ6I1QF,R25X4TBMPY91LX,R27OK7G99VK0TR,R207CYDCHJJTCJ,R3PCU8XMU173BT,R1IMONDOWRNU5V\",\"3\":\"R3EEUZKKK9J36I,R3HJVYCLYOY554,REDECAZ7AMPQC,R1CLH2ULIVG5U3,R2DMKIBGFKBD6R,RC89B5IAJUTR5,R3B3DDON5FH8DS,R13WAEJDI5RS36\"},\"review_title\":{\"0\":\"Satisfied,Charging is really fast,Value for money,Product review,Good quality,Good product,Good Product,As of now seems good\",\"1\":\"A Good Braided Cable for Your Type C Device,Good quality product from ambrane,Super cable,As,Good quality,Good product,its good,Good quality for the price but one issue with my unit\",\"2\":\"Good speed for earlier versions,Good Product,Working good,Good for the price,Good,Worth for money,Working nice,it's a really nice product\",\"3\":\"Good product,Good one,Nice,Really nice product,Very first time change,Good,Fine product but could be better,Very nice it's charging like jet\"},\"review_content\":{\"0\":\"Looks durable Charging is fine tooNo complains,Charging is really fast, good product.,Till now satisfied with the quality.,This is a good product . The charging speed is slower than the original iPhone cable,Good quality, would recommend,https:\\/\\/m.media-amazon.com\\/images\\/W\\/WEBP_402378-T1\\/images\\/I\\/81---F1ZgHL._SY88.jpg,Product had worked well till date and was having no issue.Cable is also sturdy enough...Have asked for replacement and company is doing the same...,Value for money\",\"1\":\"I ordered this cable to connect my phone to Android Auto of car. The cable is really strong and the connection ports are really well made. I already has a Micro USB cable from Ambrane and it's still in good shape. I connected my phone to the car using the cable and it got connected well and no issues. 
I also connected it to the charging port and yes it has Fast Charging support.,It quality is good at this price and the main thing is that i didn't ever thought that this cable will be so long it's good one and charging power is too good and also supports fast charging,Value for money, with extra length\\ud83d\\udc4d,Good, working fine,Product quality is good,Good,very good,Bought for my daughter's old phone.Brand new cable it was not charging, I already repacked and requested for replacement.I checked again, and there was some green colour paste\\/fungus inside the micro USB connector. I cleaned with an alcoholic and starts working again.Checked the ampere of charging speed got around 1400ma-1500ma - not bad, came with braided 1.5m long cable, pretty impressive for the price.Can't blame the manufacturer.But quality issues by the distributor, they might have stored in very humid place.\",\"2\":\"Not quite durable and sturdy,https:\\/\\/m.media-amazon.com\\/images\\/W\\/WEBP_402378-T1\\/images\\/I\\/71rIggrbUCL._SY88.jpg,Working good,https:\\/\\/m.media-amazon.com\\/images\\/W\\/WEBP_402378-T1\\/images\\/I\\/61bKp9YO6wL._SY88.jpg,Product,Very nice product,Working well,It's a really nice product\",\"3\":\"Good product,long wire,Charges good,Nice,I bought this cable for Rs.339 worthy product for this price, i tested it in various charger adapters 33w and 18w it supports fast charging as well.,Good,Ok,I had got this at good price on sale on Amazon and product is useful with warranty but for warranty you need to go very far not practical for such a cost and mine micro to type c connector stopped working after few days only.,I like this product\"},\"img_link\":{\"0\":\"https:\\/\\/m.media-amazon.com\\/images\\/W\\/WEBP_402378-T1\\/images\\/I\\/51UsScvHQNL._SX300_SY300_QL70_FMwebp_.jpg\",\"1\":\"https:\\/\\/m.media-amazon.com\\/images\\/W\\/WEBP_402378-T2\\/images\\/I\\/31zOsqQOAOL._SY445_SX342_QL70_FMwebp_.jpg\",\"2\":\"https:\\/\\/m.media-amazon.com\\/images\\/W\\/WEBP_402378-T1\\/images\\/I\\/31IvNJZnmdL._SY445_SX342_QL70_FMwebp_.jpg\",\"3\":\"https:\\/\\/m.media-amazon.com\\/images\\/I\\/41V5FtEWPkL._SX300_SY300_QL70_FMwebp_.jpg\"},\"product_link\":{\"0\":\"https:\\/\\/www.amazon.in\\/Wayona-Braided-WN3LG1-Syncing-Charging\\/dp\\/B07JW9H4J1\\/ref=sr_1_1?qid=1672909124&s=electronics&sr=1-1\",\"1\":\"https:\\/\\/www.amazon.in\\/Ambrane-Unbreakable-Charging-Braided-Cable\\/dp\\/B098NS6PVG\\/ref=sr_1_2?qid=1672909124&s=electronics&sr=1-2\",\"2\":\"https:\\/\\/www.amazon.in\\/Sounce-iPhone-Charging-Compatible-Devices\\/dp\\/B096MSW6CT\\/ref=sr_1_3?qid=1672909124&s=electronics&sr=1-3\",\"3\":\"https:\\/\\/www.amazon.in\\/Deuce-300-Resistant-Tangle-Free-Transmission\\/dp\\/B08HDJ86NZ\\/ref=sr_1_4?qid=1672909124&s=electronics&sr=1-4\"}}"}}]
| true | 1 |
<start_data_description><data_path>amazon-sales-dataset/amazon.csv:
<column_names>
['product_id', 'product_name', 'category', 'discounted_price', 'actual_price', 'discount_percentage', 'rating', 'rating_count', 'about_product', 'user_id', 'user_name', 'review_id', 'review_title', 'review_content', 'img_link', 'product_link']
<column_types>
{'product_id': 'object', 'product_name': 'object', 'category': 'object', 'discounted_price': 'object', 'actual_price': 'object', 'discount_percentage': 'object', 'rating': 'object', 'rating_count': 'object', 'about_product': 'object', 'user_id': 'object', 'user_name': 'object', 'review_id': 'object', 'review_title': 'object', 'review_content': 'object', 'img_link': 'object', 'product_link': 'object'}
<dataframe_Summary>
{'product_id': {'count': 1465, 'unique': 1351, 'top': 'B07JW9H4J1', 'freq': 3}, 'product_name': {'count': 1465, 'unique': 1337, 'top': 'Fire-Boltt Ninja Call Pro Plus 1.83" Smart Watch with Bluetooth Calling, AI Voice Assistance, 100 Sports Modes IP67 Rating, 240*280 Pixel High Resolution', 'freq': 5}, 'category': {'count': 1465, 'unique': 211, 'top': 'Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables', 'freq': 233}, 'discounted_price': {'count': 1465, 'unique': 550, 'top': '₹199', 'freq': 53}, 'actual_price': {'count': 1465, 'unique': 449, 'top': '₹999', 'freq': 120}, 'discount_percentage': {'count': 1465, 'unique': 92, 'top': '50%', 'freq': 56}, 'rating': {'count': 1465, 'unique': 28, 'top': '4.1', 'freq': 244}, 'rating_count': {'count': 1463, 'unique': 1143, 'top': '9,378', 'freq': 9}, 'about_product': {'count': 1465, 'unique': 1293, 'top': '[CHARGE & SYNC FUNCTION]- This cable comes with charging & Data sync function|[HIGH QUALITY MATERIAL]- TPE + Nylon Material to make sure that the life of the cable is enhanced significantly|[LONG CORD]- The Cable is extra thick 1.2 meter long, optimized for an easy use for your comfort at home or office|[MORE DURABLE]-This cable is unique interms of design and multi-use and is positioned to provide the best comfort and performance while using|[UNIVERSAL COMPATIBILITY]- Compatible with all devices like iPhone XS, X, XR, 8, 7, 6S, 6, 5S, iPad Pro, iPad mini and iPad Air', 'freq': 6}, 'user_id': {'count': 1465, 'unique': 1194, 'top': 'AHIKJUDTVJ4T6DV6IUGFYZ5LXMPA,AE55KTFVNXYFD5FPYWP2OUPEYNPQ,AEBWA5I4QFCA3P3OBEPMELBGN4GQ,AHMGAC6QM62UXNEOCZIHLHSXPP2Q,AFHROSCGIXUPV3FYQ7H5QOD46Q7Q,AEAMIR3CMSA32IDEINSJKHRNANTA,AF355FTXYAKFH5NYPRTE7SL3WO3Q,AG5DWPD54QGSLWJ6QUFERLPNAX4Q', 'freq': 10}, 'user_name': {'count': 1465, 'unique': 1194, 'top': '$@|\\|TO$|-|,Sethu madhav,Akash Thakur,Burger Planet,Justice ⚖️,indrajyoti d.,Aditya Kumar,E.C.GEORGE', 'freq': 10}, 'review_id': {'count': 1465, 'unique': 1194, 'top': 'R3F4T5TRYPTMIG,R3DQIEC603E7AY,R1O4Z15FD40PV5,RDVX50PD4CTFE,R3H6WKG0TA5CGU,R3Q3L1KP5QWPV3,RU0LU2PAIIME,R20FTANBPFA653', 'freq': 10}, 'review_title': {'count': 1465, 'unique': 1194, 'top': 'Worked on iPhone 7 and didn’t work on XR,Good one,Dull Physical Looks,Just Buy it,Go for it,About the product,Get charging cable at the price,Working well.', 'freq': 10}, 'review_content': {'count': 1465, 'unique': 1212, 'top': "I am not big on camera usage, personally. I was even mentally prepared for a bad camera, based on some reviews here. But I was pleasantly surprised that camera clicks good photos. They are not awesome, but they are decent photos that can even be shared.Now coming to my biggest grouse; heating issue. The phone started heating up while charging, but it was just a little and so I could have ignored it. But then it started heating up more and got me very concerned. I even ordered a replacement thinking I got a defective piece. But then, after further tests, I found that it is heating more when I download huge amounts of data, for example, when I restore data of my old phone, from back up. This is ok with me as, I don't perform huge data loads regularly, definitely not on phone. Then I tested by running tasks I usually perform such as checking office mails, attending office meeting on phone, watching a video from Amazon Prime, and so on. The phone did not heat up even a little. Personally, this is good for me.At this price range, this is a good phone. 
But if you are camera heavy user and expect to perform heavy downloads frequently, this phone may not for you. I am personally satisfied with this phone as it works for my type of usage. I will not go into plus points of this phone as they are covered by other reviews already. I am only attempting to clarify about how this phone can suit you (or not) in terms of camera and heating. I had many questions about these aspects before buying. Perhaps this review will help you make an informed decision to buy (or avoid). Cheers.,Display - BeautyCamera - decentPerformance - AmazingBattery - ok (in 5000mah u expect more tbh)Overall good phone...Also after 1day of use, i found some network connectivity issue in my jiosim, which I'm using right now in this phone, but I'll keep update this review after 1month of usage!,It's a decent mobile under this price but few things worried me , weight of the phone, too many procedure to change some settings, no screen casting. Apart from that it has good touch, a decent camera for day light , battery life is good.,I bought this smartphone for my mom. Samusung interface is very handful for easy use. Battery is superb, last whole day. Camera is mediocre but provide original colour pictures. All in all satisfied with this smartphone that i got in sale for 9499.,Unable to do video call within same service provider as in VOLTE within same service provider video call feature is available.,Product is fine. Nothing Fancy but for the budget it is a good phone.,BATTERY : more than enough for normal use Not sure in gamingCAMERA : good in this segment , can record videos in FHD 30fpsDISPLAY : since it's a LCD display the quality is a bit less , but goodV RAM : you can add upto 2gb of virtual ram but have to sacrifice your storage Space to use it OVERALL A GOOD BUDGET PHONE,Finger print is working speedy battery backup is good camera quality is also good", 'freq': 8}, 'img_link': {'count': 1465, 'unique': 1412, 'top': 'https://m.media-amazon.com/images/I/413sCRKobNL._SX300_SY300_QL70_ML2_.jpg', 'freq': 3}, 'product_link': {'count': 1465, 'unique': 1465, 'top': 'https://www.amazon.in/Wayona-Braided-WN3LG1-Syncing-Charging/dp/B07JW9H4J1/ref=sr_1_1?qid=1672909124&s=electronics&sr=1-1', 'freq': 1}}
<dataframe_info>
RangeIndex: 1465 entries, 0 to 1464
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 product_id 1465 non-null object
1 product_name 1465 non-null object
2 category 1465 non-null object
3 discounted_price 1465 non-null object
4 actual_price 1465 non-null object
5 discount_percentage 1465 non-null object
6 rating 1465 non-null object
7 rating_count 1463 non-null object
8 about_product 1465 non-null object
9 user_id 1465 non-null object
10 user_name 1465 non-null object
11 review_id 1465 non-null object
12 review_title 1465 non-null object
13 review_content 1465 non-null object
14 img_link 1465 non-null object
15 product_link 1465 non-null object
dtypes: object(16)
memory usage: 183.2+ KB
<some_examples>
{'product_id': {'0': 'B07JW9H4J1', '1': 'B098NS6PVG', '2': 'B096MSW6CT', '3': 'B08HDJ86NZ'}, 'product_name': {'0': 'Wayona Nylon Braided USB to Lightning Fast Charging and Data Sync Cable Compatible for iPhone 13, 12,11, X, 8, 7, 6, 5, iPad Air, Pro, Mini (3 FT Pack of 1, Grey)', '1': 'Ambrane Unbreakable 60W / 3A Fast Charging 1.5m Braided Type C Cable for Smartphones, Tablets, Laptops & other Type C devices, PD Technology, 480Mbps Data Sync, Quick Charge 3.0 (RCT15A, Black)', '2': 'Sounce Fast Phone Charging Cable & Data Sync USB Cable Compatible for iPhone 13, 12,11, X, 8, 7, 6, 5, iPad Air, Pro, Mini & iOS Devices', '3': 'boAt Deuce USB 300 2 in 1 Type-C & Micro USB Stress Resistant, Tangle-Free, Sturdy Cable with 3A Fast Charging & 480mbps Data Transmission, 10000+ Bends Lifespan and Extended 1.5m Length(Martian Red)'}, 'category': {'0': 'Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables', '1': 'Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables', '2': 'Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables', '3': 'Computers&Accessories|Accessories&Peripherals|Cables&Accessories|Cables|USBCables'}, 'discounted_price': {'0': '₹399', '1': '₹199', '2': '₹199', '3': '₹329'}, 'actual_price': {'0': '₹1,099', '1': '₹349', '2': '₹1,899', '3': '₹699'}, 'discount_percentage': {'0': '64%', '1': '43%', '2': '90%', '3': '53%'}, 'rating': {'0': '4.2', '1': '4.0', '2': '3.9', '3': '4.2'}, 'rating_count': {'0': '24,269', '1': '43,994', '2': '7,928', '3': '94,363'}, 'about_product': {'0': "High Compatibility : Compatible With iPhone 12, 11, X/XsMax/Xr ,iPhone 8/8 Plus,iPhone 7/7 Plus,iPhone 6s/6s Plus,iPhone 6/6 Plus,iPhone 5/5s/5c/se,iPad Pro,iPad Air 1/2,iPad mini 1/2/3,iPod nano7,iPod touch and more apple devices.|Fast Charge&Data Sync : It can charge and sync simultaneously at a rapid speed, Compatible with any charging adaptor, multi-port charging station or power bank.|Durability : Durable nylon braided design with premium aluminum housing and toughened nylon fiber wound tightly around the cord lending it superior durability and adding a bit to its flexibility.|High Security Level : It is designed to fully protect your device from damaging excessive current.Copper core thick+Multilayer shielding, Anti-interference, Protective circuit equipment.|WARRANTY: 12 months warranty and friendly customer services, ensures the long-time enjoyment of your purchase. 
If you meet any question or problem, please don't hesitate to contact us.", '1': 'Compatible with all Type C enabled devices, be it an android smartphone (Mi, Samsung, Oppo, Vivo, Realme, OnePlus, etc), tablet, laptop (Macbook, Chromebook, etc)|Supports Quick Charging (2.0/3.0)|Unbreakable – Made of special braided outer with rugged interior bindings, it is ultra-durable cable that won’t be affected by daily rough usage|Ideal Length – It has ideal length of 1.5 meters which is neither too short like your typical 1meter cable or too long like a 2meters cable|Supports maximum 3A fast charging and 480 Mbps data transfer speed|6 months manufacturer warranty from the date of purchase', '2': "【 Fast Charger& Data Sync】-With built-in safety proctections and four-core copper wires promote maximum signal quality and strength and enhance charging & data transfer speed with up to 480 mb/s transferring speed.|【 Compatibility】-Compatible with iPhone 13, 12,11, X, 8, 7, 6, 5, iPad Air, Pro, Mini & iOS devices.|【 Sturdy & Durable】-The jacket and enforced connector made of TPE and premium copper, are resistant to repeatedly bending and coiling.|【 Ultra High Quality】: According to the experimental results, the fishbone design can accept at least 20,000 bending and insertion tests for extra protection and durability. Upgraded 3D aluminum connector and exclusive laser welding technology, which to ensure the metal part won't break and also have a tighter connection which fits well even with a protective case on and will never loose connection.|【 Good After Sales Service】-Our friendly and reliable customer service will respond to you within 24 hours ! you can purchase with confidence,and every sale includes a 365-day worry-free Service to prove the importance we set on quality.", '3': 'The boAt Deuce USB 300 2 in 1 cable is compatible with smartphones, tablets, PC peripherals, Bluetooth speakers, power banks and all other devices with Type-C as well as Micro USB port|It ensures 3A fast charging and data transmissions with rapid sync at 480 mbps|The premium Nylon braided skin makes it sturdy and invincible against external damage|Its Aluminium alloy shell housing makes it last longer with 10000+ Bends Lifespan with extended frame protection for strain relief|The resilient and flexible design offers a tangle free experience seamlessly|Deuce USB 300 cable offers a perfect 1.5 meters in length for smooth & hassle-free user experience|2 years warranty from the date of purchase'}, 'user_id': {'0': 'AG3D6O4STAQKAY2UVGEUV46KN35Q,AHMY5CWJMMK5BJRBBSNLYT3ONILA,AHCTC6ULH4XB6YHDY6PCH2R772LQ,AGYHHIERNXKA6P5T7CZLXKVPT7IQ,AG4OGOFWXJZTQ2HKYIOCOY3KXF2Q,AENGU523SXMOS7JPDTW52PNNVWGQ,AEQJHCVTNINBS4FKTBGQRQTGTE5Q,AFC3FFC5PKFF5PMA52S3VCHOZ5FQ', '1': 'AECPFYFQVRUWC3KGNLJIOREFP5LQ,AGYYVPDD7YG7FYNBXNGXZJT525AQ,AHONIZU3ICIEHQIGQ6R2VFRSBXOQ,AFPHD2CRPDZMWMBL7WXRSVYWS5JA,AEZ346GX3HJ4O4XNRPHCNHXQURMQ,AEPSWFPNECKO34PUC7I56ITGXR6Q,AHWVEHR5DYLVFTO2KF3IZATFQSWQ,AH4QT33M55677I7ISQOAKEQWACYQ', '2': 'AGU3BBQ2V2DDAMOAKGFAWDDQ6QHA,AESFLDV2PT363T2AQLWQOWZ4N3OA,AHTPQRIMGUD4BYR5YIHBH3CCGEFQ,AEUVWXYP5LT7PZLLZENEO2NODPBQ,AHC7MPW55DOO6WNCOQVA2VHOD26A,AFDI6FRPFBTNBG7BAEB7JDJSMKDQ,AFQKCEEEKXCOHTDG4WUN3XPPHJQQ,AHKUUFNMBZIDLSSPA4FEHIO2EC7Q', '3': 'AEWAZDZZJLQUYVOVGBEUKSLXHQ5A,AG5HTSFRRE6NL3M5SGCUQBP7YSCA,AH725ST5NW2Y4JZPKUNTIJCUK2BA,AHV3TXIFCJPMS4D5JATCEUR266MQ,AGWIGDEMFIIUAOXYY2QATNBSUGHA,AFSTSLQUV4EVEXWKBOLEFHL2H5YQ,AGAKDNBHY2FKX7I4ACRGILU7QL7A,AFNWJUWJRHCC6HN52KMG5AKZY37Q'}, 'user_name': {'0': 'Manav,Adarsh gupta,Sundeep,S.Sayeed Ahmed,jaspreet singh,Khaja 
moin,Anand,S.ARUMUGAM', '1': 'ArdKn,Nirbhay kumar,Sagar Viswanathan,Asp,Placeholder,BharanI,sonia,Niam', '2': 'Kunal,Himanshu,viswanath,sai niharka,saqib malik,Aashiq,Ramu Challa,Sanjay gupta', '3': 'Omkar dhale,JD,HEMALATHA,Ajwadh a.,amar singh chouhan,Ravi Siddan,Himanshu Goel,Udaykumar'}, 'review_id': {'0': 'R3HXWT0LRP0NMF,R2AJM3LFTLZHFO,R6AQJGUP6P86,R1KD19VHEDV0OR,R3C02RMYQMK6FC,R39GQRVBUZBWGY,R2K9EDOE15QIRJ,R3OI7YT648TL8I', '1': 'RGIQEG07R9HS2,R1SMWZQ86XIN8U,R2J3Y1WL29GWDE,RYGGS0M09S3KY,R17KQRUTAN5DKS,R3AAQGS6HP2QUK,R1HDNOG6TO2CCA,R3PHKXYA5AFEOU', '2': 'R3J3EQQ9TZI5ZJ,R3E7WBGK7ID0KV,RWU79XKQ6I1QF,R25X4TBMPY91LX,R27OK7G99VK0TR,R207CYDCHJJTCJ,R3PCU8XMU173BT,R1IMONDOWRNU5V', '3': 'R3EEUZKKK9J36I,R3HJVYCLYOY554,REDECAZ7AMPQC,R1CLH2ULIVG5U3,R2DMKIBGFKBD6R,RC89B5IAJUTR5,R3B3DDON5FH8DS,R13WAEJDI5RS36'}, 'review_title': {'0': 'Satisfied,Charging is really fast,Value for money,Product review,Good quality,Good product,Good Product,As of now seems good', '1': 'A Good Braided Cable for Your Type C Device,Good quality product from ambrane,Super cable,As,Good quality,Good product,its good,Good quality for the price but one issue with my unit', '2': "Good speed for earlier versions,Good Product,Working good,Good for the price,Good,Worth for money,Working nice,it's a really nice product", '3': "Good product,Good one,Nice,Really nice product,Very first time change,Good,Fine product but could be better,Very nice it's charging like jet"}, 'review_content': {'0': 'Looks durable Charging is fine tooNo complains,Charging is really fast, good product.,Till now satisfied with the quality.,This is a good product . The charging speed is slower than the original iPhone cable,Good quality, would recommend,https://m.media-amazon.com/images/W/WEBP_402378-T1/images/I/81---F1ZgHL._SY88.jpg,Product had worked well till date and was having no issue.Cable is also sturdy enough...Have asked for replacement and company is doing the same...,Value for money', '1': "I ordered this cable to connect my phone to Android Auto of car. The cable is really strong and the connection ports are really well made. I already has a Micro USB cable from Ambrane and it's still in good shape. I connected my phone to the car using the cable and it got connected well and no issues. I also connected it to the charging port and yes it has Fast Charging support.,It quality is good at this price and the main thing is that i didn't ever thought that this cable will be so long it's good one and charging power is too good and also supports fast charging,Value for money, with extra length👍,Good, working fine,Product quality is good,Good,very good,Bought for my daughter's old phone.Brand new cable it was not charging, I already repacked and requested for replacement.I checked again, and there was some green colour paste/fungus inside the micro USB connector. 
I cleaned with an alcoholic and starts working again.Checked the ampere of charging speed got around 1400ma-1500ma - not bad, came with braided 1.5m long cable, pretty impressive for the price.Can't blame the manufacturer.But quality issues by the distributor, they might have stored in very humid place.", '2': "Not quite durable and sturdy,https://m.media-amazon.com/images/W/WEBP_402378-T1/images/I/71rIggrbUCL._SY88.jpg,Working good,https://m.media-amazon.com/images/W/WEBP_402378-T1/images/I/61bKp9YO6wL._SY88.jpg,Product,Very nice product,Working well,It's a really nice product", '3': 'Good product,long wire,Charges good,Nice,I bought this cable for Rs.339 worthy product for this price, i tested it in various charger adapters 33w and 18w it supports fast charging as well.,Good,Ok,I had got this at good price on sale on Amazon and product is useful with warranty but for warranty you need to go very far not practical for such a cost and mine micro to type c connector stopped working after few days only.,I like this product'}, 'img_link': {'0': 'https://m.media-amazon.com/images/W/WEBP_402378-T1/images/I/51UsScvHQNL._SX300_SY300_QL70_FMwebp_.jpg', '1': 'https://m.media-amazon.com/images/W/WEBP_402378-T2/images/I/31zOsqQOAOL._SY445_SX342_QL70_FMwebp_.jpg', '2': 'https://m.media-amazon.com/images/W/WEBP_402378-T1/images/I/31IvNJZnmdL._SY445_SX342_QL70_FMwebp_.jpg', '3': 'https://m.media-amazon.com/images/I/41V5FtEWPkL._SX300_SY300_QL70_FMwebp_.jpg'}, 'product_link': {'0': 'https://www.amazon.in/Wayona-Braided-WN3LG1-Syncing-Charging/dp/B07JW9H4J1/ref=sr_1_1?qid=1672909124&s=electronics&sr=1-1', '1': 'https://www.amazon.in/Ambrane-Unbreakable-Charging-Braided-Cable/dp/B098NS6PVG/ref=sr_1_2?qid=1672909124&s=electronics&sr=1-2', '2': 'https://www.amazon.in/Sounce-iPhone-Charging-Compatible-Devices/dp/B096MSW6CT/ref=sr_1_3?qid=1672909124&s=electronics&sr=1-3', '3': 'https://www.amazon.in/Deuce-300-Resistant-Tangle-Free-Transmission/dp/B08HDJ86NZ/ref=sr_1_4?qid=1672909124&s=electronics&sr=1-4'}}
<end_description>
| 1,039 | 0 | 5,309 | 1,039 |
129174592
|
<jupyter_start><jupyter_text>Historical Weather Data for Indian Cities
### Context
The dataset was created keeping in mind the community's need for such historical weather data. It covers the top 8 Indian cities by population.
### Content
The dataset was extracted with the help of the worldweatheronline.com API and the wwo_hist package. The datasets contain hourly weather data from 01-01-2009 to 01-01-2020, so each city has more than 10 years of data. This data can be used to visualize changes due to global warming or to predict the weather for upcoming days, weeks, months, seasons, etc.
Note: The data was extracted with the help of the worldweatheronline.com API and I can't guarantee the accuracy of the data.
Kaggle dataset identifier: historical-weather-data-for-indian-cities
<jupyter_script># **Enhancing Weather
# Predictions through
# Machine Learning and Data
# Optimization in Python**
# Team Members
# 1. Vansita Soni (21STUJPCS0025)
# 2. Rajendra Saini
# 3. Aditya Raj Singh
# 4. Sanjay Sharma
# 5. Khushi Kumari
# 6. Kartikeyan Ajmera
# # Importing Needed Packages
# The first step is to import the required packages, such as warnings, numpy, pandas, matplotlib.pyplot, sklearn, LinearRegression, and preprocessing. These packages will be used later in the code.
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LinearRegression
from sklearn import preprocessing
# # Reading CSV file as weather_df and making date_time column as index of dataframe
# The second step is to read the CSV file 'delhi.csv' and store it as the dataframe 'weather_df'. Here, the 'parse_dates' parameter is used to convert the 'date_time' column into a datetime format and 'index_col' parameter is used to set the 'date_time' column as the index of the dataframe.
weather_df = pd.read_csv(
"/kaggle/input/historical-weather-data-for-indian-cities/delhi.csv",
parse_dates=["date_time"],
index_col="date_time",
)
weather_df.tail(5)
# # Checking columns in our dataframe
# The 'columns' attribute is used to check the column names of the dataframe.
weather_df.columns
# ## Now shape
# The 'shape' attribute is used to check the dimensions of the dataframe, i.e., the number of rows and columns.
weather_df.shape
# **Describing the dataframe:**
# The 'describe' method is used to display the summary statistics of the dataframe.
weather_df.describe()
# # Checking if there are any null values in the dataset
# The 'isnull' and 'any' methods are used to check if there are any null values in the dataframe.
weather_df.isnull().any()
# ### Now let's separate the feature to be predicted (i.e. temperature) from the rest of the features. weather_x stores the rest of the dataset while weather_y has the temperature column.
# Here, a new dataframe 'weather_df_num' is created with only the features that are required for the prediction. The 'loc' indexer is used to select the required columns, i.e., 'maxtempC', 'mintempC', 'cloudcover', 'humidity', 'tempC', 'sunHour', 'HeatIndexC', 'precipMM', 'pressure', and 'windspeedKmph'. The 'head' method is used to display the first five rows of the new dataframe.
weather_df_num = weather_df.loc[
:,
[
"maxtempC",
"mintempC",
"cloudcover",
"humidity",
"tempC",
"sunHour",
"HeatIndexC",
"precipMM",
"pressure",
"windspeedKmph",
],
]
weather_df_num.head()
# # Shape of new dataframe
# The 'shape' attribute is used to display the dimensions of the new dataframe.
weather_df_num.shape
# # Columns in new dataframe
# The 'columns' attribute is used to display the column names of the new dataframe.
weather_df_num.columns
# ## Plotting all the column values
# The 'plot' method is used to display the line plots of all the columns in the dataframe.
weather_df_num.plot(subplots=True, figsize=(25, 20))
# # Plotting all the column values for 1 year
# The 'resample' method is used to resample the data by day and the 'fillna' method is used to fill any missing values with the previous value. The resulting dataframe is plotted using the 'plot' method.
weather_df_num["2019":"2020"].resample("D").fillna(method="pad").plot(
subplots=True, figsize=(25, 20)
)
weather_df_num.hist(bins=10, figsize=(15, 15))
weth = weather_df_num["2019":"2020"]
weth.head()
weather_y = weather_df_num.pop("tempC")
weather_x = weather_df_num
# ### Now our dataset is prepared and it is ready to be fed to the model for training. It's time to split the dataset into training and testing.
train_X, test_X, train_y, test_y = train_test_split(
weather_x, weather_y, test_size=0.2, random_state=4
)
train_X.shape
train_y.shape
# ### train_X has all the features except temperature and train_y has the corresponding temperature for those features. In supervised machine learning we first feed the model with inputs and their associated outputs, and then we check it with new inputs.
train_y.head()
# This code uses three different machine learning models to predict the temperature from the given features. Let's go through the code and understand how each of the three models works.
# Firstly, the code imports the required libraries for the implementation of the model. The sklearn library is used to import the three different machine learning models - RandomForestRegressor, DecisionTreeRegressor, and LinearRegression. The numpy and pandas libraries are used for handling data and numerical computations. The matplotlib library is used for data visualization purposes.
# ```
# from sklearn.ensemble import RandomForestRegressor
# from sklearn.tree import DecisionTreeRegressor
# from sklearn.linear_model import LinearRegression
# import numpy as np
# import pandas as pd
# import matplotlib.pyplot as plt
# ```
# Next, the code instantiates the RandomForestRegressor model and sets the parameters such as max_depth, random_state, and n_estimators to tune the model's performance. The fit method is called on the training data to train the model.
# ```
# regr = RandomForestRegressor(max_depth=90, random_state=0, n_estimators=100)
# regr.fit(train_X, train_y)
# ```
# Similarly, the DecisionTreeRegressor model is instantiated, and the fit method is called on the training data to train the model.
# ```
# regressor = DecisionTreeRegressor(random_state=0)
# regressor.fit(train_X, train_y)
# ```
# Finally, the LinearRegression model is instantiated, and the fit method is called on the training data to train the model.
# ```
# model = LinearRegression()
# model.fit(train_X, train_y)
# ```
# After training the models, the code predicts the temperature based on the test data using each of the three models. The predict method is called on the trained models to predict the temperature.
# ```
# prediction3 = regr.predict(test_X)
# prediction2 = regressor.predict(test_X)
# prediction = model.predict(test_X)
# ```
# The code then calculates the mean absolute error for each of the three models to evaluate the models' performance.
# ```
# np.mean(np.absolute(prediction3 - test_y))
# np.mean(np.absolute(prediction2 - test_y))
# np.mean(np.absolute(prediction - test_y))
# ```
# Similarly, the code calculates the variance score for each of the three models to evaluate the models' performance.
# ```
# regr.score(test_X, test_y)
# regressor.score(test_X, test_y)
# model.score(test_X, test_y)
# ```
# The code then rounds the predicted values to two decimal places and creates a dataframe to display the actual and predicted temperature values for each of the three models.
# ```
# for i in range(len(prediction3)):
# prediction3[i] = round(prediction3[i], 2)
# pd.DataFrame({'Actual': test_y, 'Prediction': prediction3, 'diff': (test_y - prediction3)})
# for i in range(len(prediction2)):
# prediction2[i] = round(prediction2[i], 2)
# pd.DataFrame({'Actual': test_y, 'Prediction': prediction2, 'diff': (test_y - prediction2)})
# for i in range(len(prediction)):
# prediction[i] = round(prediction[i], 2)
# pd.DataFrame({'Actual': test_y, 'Prediction': prediction, 'diff': (test_y - prediction)})
# ```
# Lastly, the code uses the matplotlib library to visualize the scatter plot for three features - minimum temperature, heat index, and pressure.
# ```
# plt.scatter(weth.mintempC, weth.tempC)
# plt.xlabel("Minimum Temperature")
# plt.ylabel("Temperature")
# plt.show()
# plt.scatter(weth.HeatIndexC, weth.tempC)
# plt.xlabel("Heat Index")
# plt.ylabel("Temperature")
# plt.show()
# plt.scatter(weth.pressure, weth.tempC)
# plt.xlabel("Minimum Temperature")
# plt.ylabel("Temperature")
# plt.show()
# # Multiple Linear Regression
# Before fitting the model, the code below uses the matplotlib library to visualize scatter plots of temperature against three features - minimum temperature, heat index, and pressure.
# # How the multiple linear regression algorithm works
# In a multiple regression model, **the predicted output variable (often denoted as Y)** is a **linear function of the input variables (often denoted as X1, X2, X3, ..., Xp)**, with an **additional constant term (often denoted as β0)** included as an intercept. The equation for the model can be written as:
# **Y = β0 + β1X1 + β2X2 + β3X3 + ... + βpXp + ε**
# where:
# - **Y** is the predicted output variable (often called the dependent variable or response variable).
# - **β0** is the intercept, which represents the predicted value of Y when all input variables are zero.
# - **β1, β2, β3, ..., βp** are the coefficients or weights assigned to each input variable, which represent the change in the predicted value of Y associated with a one-unit change in the corresponding input variable.
# - **X1, X2, X3, ..., Xp** are the input variables (often called the independent variables or predictors).
# - **ε** is the error term, which represents the random variability or noise in the relationship between the input variables and the output variable.
# To estimate the values of the coefficients **β0, β1, β2, β3, ..., βp**, we use a method called least squares regression, which minimizes the sum of squared errors between the predicted and actual output values in the training data. This involves solving a system of linear equations to find the values of the coefficients that minimize the residual sum of squares (RSS).
# Once the coefficients are estimated, we can use the model to make predictions on new data by plugging in the input values and solving for the predicted output value using the above equation.
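# As a minimal illustration of the least squares idea described above, the coefficients can also be obtained directly with numpy's least-squares solver. This is only a hedged sketch that reuses the train_X / train_y split created earlier in this notebook; the sklearn LinearRegression fit further below remains the model actually used for the predictions.
import numpy as np  # already imported above; repeated here so the sketch is self-contained

X_design = np.column_stack([np.ones(len(train_X)), train_X.values])  # prepend a column of ones for the intercept β0
beta, *_ = np.linalg.lstsq(X_design, train_y.values, rcond=None)  # minimises the residual sum of squares ||Xβ - y||²
print("Intercept (β0): %.4f" % beta[0])
print("Coefficients (β1..βp):", np.round(beta[1:], 4))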
plt.scatter(weth.mintempC, weth.tempC)
plt.xlabel("Minimum Temperature")
plt.ylabel("Temperature")
plt.show()
plt.scatter(weth.HeatIndexC, weth.tempC)
plt.xlabel("Heat Index")
plt.ylabel("Temperature")
plt.show()
plt.scatter(weth.pressure, weth.tempC)
plt.xlabel("Minimum Temperature")
plt.ylabel("Temperature")
plt.show()
model = LinearRegression()
model.fit(train_X, train_y)
prediction = model.predict(test_X)
# calculating error
np.mean(np.absolute(prediction - test_y))
print("Variance score: %.2f" % model.score(test_X, test_y))
import plotly.graph_objs as go
for i in range(len(prediction)):
prediction[i] = round(prediction[i], 2)
results = pd.DataFrame(
{"Actual": test_y, "Prediction": prediction, "Difference": (test_y - prediction)}
)
fig = go.Figure(
data=[
go.Table(
header=dict(values=list(results.columns)),
cells=dict(values=[results.Actual, results.Prediction, results.Difference]),
)
]
)
fig.show()
# # Decision Tree Regression
# **Let's denote the input variables as X = [X1, X2, ..., Xp]** and the **output variable as Y**. A decision tree can be represented by a set of binary decision rules that partition the input space into non-overlapping regions. For each region, we assign a constant value that represents the predicted output value for all input values in that region.
# The decision rules can be represented as a series of if-then statements, where each statement tests the value of one input variable and branches the tree accordingly. For example, a simple decision tree for a single input variable X1 might have the following structure:
# **if X1 <= c1 then Y = y1**
# **else Y = y2**
# where c1 is a constant threshold value for X1, and y1 and y2 are constant values representing the predicted output value for X1 <= c1 and X1 > c1, respectively.
# To build a decision tree regression model, we use an algorithm to recursively partition the input space into smaller regions based on the input variables that are most predictive of the output variable. The algorithm selects the best split point for each input variable based on some criterion, such as the reduction in mean squared error or the increase in R-squared value. The tree is grown until a stopping criterion is met, such as reaching a minimum node size or a maximum tree depth.
# How the decision tree regression algorithm works
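# The snippet below is a minimal, illustrative sketch (not sklearn's implementation) of the split-selection step described above: for one made-up input variable it tries every candidate threshold c1 and keeps the one that minimises the weighted MSE of the two resulting leaves. The arrays x_toy and y_toy are invented purely for this illustration.
import numpy as np


def leaf_mse(values):
    # MSE of a leaf that predicts the mean of its samples
    return np.mean((values - values.mean()) ** 2) if len(values) else 0.0


x_toy = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y_toy = np.array([5.0, 6.0, 7.0, 20.0, 21.0, 22.0])
best_threshold, best_score = None, np.inf
for c1 in (x_toy[:-1] + x_toy[1:]) / 2:  # midpoints between sorted x values as candidate thresholds
    left, right = y_toy[x_toy <= c1], y_toy[x_toy > c1]
    score = (len(left) * leaf_mse(left) + len(right) * leaf_mse(right)) / len(y_toy)
    if score < best_score:
        best_threshold, best_score = c1, score
print("Best split threshold c1:", best_threshold)  # 3.5, which separates the two flat regions of y_toy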
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor(random_state=0)
regressor.fit(train_X, train_y)
prediction2 = regressor.predict(test_X)
np.mean(np.absolute(prediction2 - test_y))
print("Variance score: %.2f" % regressor.score(test_X, test_y))
import plotly.graph_objs as go
for i in range(len(prediction2)):
    prediction2[i] = round(prediction2[i], 2)
results = pd.DataFrame(
{"Actual": test_y, "Prediction": prediction2, "Difference": (test_y - prediction2)}
)
fig = go.Figure(
data=[
go.Table(
header=dict(values=list(results.columns)),
cells=dict(values=[results.Actual, results.Prediction, results.Difference]),
)
]
)
fig.show()
# # Random Forest Regression
# Let's denote the input variables as X = [X1, X2, ..., Xp] and the output variable as Y. To build a random forest regression model, we first create a set of N decision trees T1, T2, ..., TN, where each tree is built using a bootstrap sample of the training data and a random subset of the input variables. The trees are built independently of each other, and the bootstrap sampling ensures that each tree sees a slightly different subset of the training data.
# For each tree Ti, we use the same algorithm as in decision tree regression to recursively partition the input space into smaller regions based on the input variables that are most predictive of the output variable. The tree is grown until a stopping criterion is met, such as reaching a minimum node size or a maximum tree depth.
# To make a prediction on new data, we pass the data through each of the N decision trees, and obtain a prediction for each tree. The predictions are then aggregated using some aggregation rule, such as taking the mean or median of the predictions. This final prediction represents the output of the random forest regression model.
# The aggregation rule can be chosen based on the properties of the data and the goals of the analysis. For example, taking the mean of the predictions tends to produce smoother predictions that are less sensitive to outliers, while taking the median tends to be more robust to extreme values.
# How the random forest regression algorithm works
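# The snippet below is a hedged, simplified sketch of the bagging-and-averaging idea described above: it fits a few plain DecisionTreeRegressor models on bootstrap resamples of a small made-up dataset (X_toy, y_toy) and averages their predictions. It only illustrates the aggregation rule and is not a substitute for the sklearn RandomForestRegressor trained on the weather data below.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X_toy = rng.uniform(0, 10, size=(200, 3))  # invented inputs, for illustration only
y_toy = 2 * X_toy[:, 0] + np.sin(X_toy[:, 1]) + rng.normal(0, 0.3, 200)
tree_predictions = []
for _ in range(10):  # N = 10 trees, each grown on its own bootstrap sample
    idx = rng.randint(0, len(X_toy), len(X_toy))  # sample row indices with replacement
    tree = DecisionTreeRegressor(random_state=0).fit(X_toy[idx], y_toy[idx])
    tree_predictions.append(tree.predict(X_toy[:5]))
ensemble_prediction = np.mean(tree_predictions, axis=0)  # aggregate the per-tree outputs by taking the mean
print("First-point predictions of the 10 trees:", np.round([p[0] for p in tree_predictions], 2))
print("Ensemble (mean) prediction for 5 points:", np.round(ensemble_prediction, 2))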
from sklearn.ensemble import RandomForestRegressor
regr = RandomForestRegressor(max_depth=90, random_state=0, n_estimators=100)
regr.fit(train_X, train_y)
prediction3 = regr.predict(test_X)
np.mean(np.absolute(prediction3 - test_y))
print("Variance score: %.2f" % regr.score(test_X, test_y))
import plotly.graph_objs as go
for i in range(len(prediction3)):
    prediction3[i] = round(prediction3[i], 2)
results = pd.DataFrame(
{"Actual": test_y, "Prediction": prediction3, "Difference": (test_y - prediction3)}
)
fig = go.Figure(
data=[
go.Table(
header=dict(values=list(results.columns)),
cells=dict(values=[results.Actual, results.Prediction, results.Difference]),
)
]
)
fig.show()
from sklearn.metrics import r2_score
# # Calculating R2-score for Multiple Linear Regression
print("Mean absolute error: %.2f" % np.mean(np.absolute(prediction - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((prediction - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, prediction))
# # Calculating R2-score for Decision Tree Regression
print("Mean absolute error: %.2f" % np.mean(np.absolute(prediction2 - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((prediction2 - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, prediction2))
# # Calculating R2-score for Random Forest Regression
from sklearn.metrics import r2_score
print("Mean absolute error: %.2f" % np.mean(np.absolute(prediction3 - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((prediction3 - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, prediction3))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/174/129174592.ipynb
|
historical-weather-data-for-indian-cities
|
hiteshsoneji
|
[{"Id": 129174592, "ScriptId": 37423813, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14462421, "CreationDate": "05/11/2023 14:38:38", "VersionNumber": 5.0, "Title": "WF-COA", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 327.0, "LinesInsertedFromPrevious": 29.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 298.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184989043, "KernelVersionId": 129174592, "SourceDatasetVersionId": 1129180}]
|
[{"Id": 1129180, "DatasetId": 635203, "DatasourceVersionId": 1159657, "CreatorUserId": 4267922, "LicenseName": "Other (specified in description)", "CreationDate": "05/04/2020 12:43:21", "VersionNumber": 1.0, "Title": "Historical Weather Data for Indian Cities", "Slug": "historical-weather-data-for-indian-cities", "Subtitle": "Historical weather data for top 8 indian cities per population", "Description": "### Context\n\nThe dataset was created by keeping in mind the necessity of such historical weather data in the community. The datasets for top 8 Indian cities as per the population. \n\n\n### Content\n\nThe dataset was used with the help of the worldweatheronline.com API and the wwo_hist package. The datasets contain hourly weather data from 01-01-2009 to 01-01-2020. The data of each city is for more than 10 years. This data can be used to visualize the change in data due to global warming or can be used to predict the weather for upcoming days, weeks, months, seasons, etc.\nNote : The data was extracted with the help of worldweatheronline.com API and I can't guarantee about the accuracy of the data.\n\n\n### Acknowledgements\n\nThe data is owned by worldweatheronline.com and is extracted with the help of their API. \n\n\n### Inspiration\n\nThe main target of this dataset can be used to predict weather for the next day or week with huge amounts of data provided in the dataset. Furthermore, this data can also be used to make visualization which would help to understand the impact of global warming over the various aspects of the weather like precipitation, humidity, temperature, etc.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 635203, "CreatorUserId": 4267922, "OwnerUserId": 4267922.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1129180.0, "CurrentDatasourceVersionId": 1159657.0, "ForumId": 649466, "Type": 2, "CreationDate": "05/04/2020 12:43:21", "LastActivityDate": "05/04/2020", "TotalViews": 21168, "TotalDownloads": 2732, "TotalVotes": 39, "TotalKernels": 3}]
|
[{"Id": 4267922, "UserName": "hiteshsoneji", "DisplayName": "Hitesh Soneji", "RegisterDate": "12/30/2019", "PerformanceTier": 1}]
|
| false | 1 | 4,400 | 0 | 4,614 | 4,400 |
||
129174612
|
<jupyter_start><jupyter_text>nestle
Kaggle dataset identifier: nestle
<jupyter_script># Data manipulation
# ==============================================================================
import numpy as np
import pandas as pd
from datetime import datetime
# Plots
# ==============================================================================
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
import statsmodels.api as sm
import plotly.express as px
import plotly.graph_objects as go
path_week = "/kaggle/input/nestle/week.xlsx"
df = pd.read_excel(path_week)
name_col = [
"Time",
"total_product",
"boiler1_Elec",
"boiler2_Elec",
"compress_Elec",
"chiller_Elec",
"other_Elec",
"total_Elec",
"boiler2_bio",
"boiler1_oil",
"temp",
"real_a",
]
df.rename(columns={df.columns[0]: "Time"}, inplace=True)
df["Time"] = pd.to_datetime(df["Time"])
df.columns = name_col
df["weekofyear"] = df["Time"].dt.isocalendar().week
df.head(5)
df = df.set_index("Time")
import math
def add_week_of_month(df):
df["week_in_month"] = pd.to_numeric(df.index.day / 7)
df["week_in_month"] = df["week_in_month"].apply(lambda x: math.ceil(x))
return df
df = add_week_of_month(df)
df.loc[:, "month"] = pd.Series(df.index.month, df.index)
df["boiler1_toe"] = df["boiler1_Elec"] * 0.0001543 + df["boiler1_oil"] * 0.00088
df["boiler2_toe"] = df["boiler2_Elec"] * 0.0001543 + df["boiler2_bio"] * 0.0003869
df["boiler_toe"] = df["boiler2_toe"] + df["boiler1_toe"]
df["compress_toe"] = df["compress_Elec"] * 0.0001543
df["chiller_toe"] = df["chiller_Elec"] * 0.0001543
df["other_toe"] = df["other_Elec"] * 0.0001543
df["total_toe"] = (
df["other_toe"]
+ df["chiller_toe"]
+ df["compress_toe"]
+ df["boiler2_toe"]
+ df["boiler1_toe"]
)
df["toe_product"] = df["total_toe"] / df["total_product"]
df.head(5)
# # Isolation Forest
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.metrics import classification_report, accuracy_score
from sklearn.metrics import f1_score
from itertools import product
data_test = ["boiler1_toe", "boiler2_toe", "chiller_toe", "compress_toe"]
outliers_fraction = [0.03, 0.04, 0.05, 0.1]
n_estimate = [100, 200, 300, 100]
max_sample = [0.6, 0.8, 0.1]
print(list(product(outliers_fraction, n_estimate, max_sample)))
for i in data_test:
    # seed the results table with a dummy row (renamed to avoid shadowing the built-in `dict`)
    seed_row = {"outliers_fraction": [0], "n_estimate": [0], "max_sample": [0], "f1": [0]}
    result = pd.DataFrame(seed_row)
    data_buff = pd.DataFrame(df[i])
    # standardise the single energy feature before fitting the Isolation Forest
    scaler = StandardScaler()
    np_scaled = scaler.fit_transform(data_buff.values.reshape(-1, 1))
    data = pd.DataFrame(np_scaled)
    # grid search over contamination, number of trees and sub-sample size
    for c, n, m in product(outliers_fraction, n_estimate, max_sample):
        model = IsolationForest(contamination=c, n_estimators=n, max_samples=m)
        model.fit(data)
        data_buff["anomaly"] = model.predict(data)  # -1 = anomaly, 1 = normal
        dt = {
            "outliers_fraction": [c],
            "n_estimate": [n],
            "max_sample": [m],
            "f1": [f1_score(df["real_a"], data_buff["anomaly"], pos_label=-1)],
        }
        buff = pd.DataFrame(dt)
        result = pd.concat([result, buff], ignore_index=True)
    print(i)
    # report the hyper-parameter combination with the best F1 score for this feature
    print(result.loc[result["f1"].idxmax()])
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/174/129174612.ipynb
|
nestle
|
dngovn
|
[{"Id": 129174612, "ScriptId": 38212962, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9608077, "CreationDate": "05/11/2023 14:38:46", "VersionNumber": 7.0, "Title": "Isolation Forest", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 95.0, "LinesInsertedFromPrevious": 2.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 93.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184989058, "KernelVersionId": 129174612, "SourceDatasetVersionId": 5663209}]
|
[{"Id": 5663209, "DatasetId": 3007716, "DatasourceVersionId": 5738664, "CreatorUserId": 9608077, "LicenseName": "Unknown", "CreationDate": "05/11/2023 12:58:26", "VersionNumber": 16.0, "Title": "nestle", "Slug": "nestle", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Data Update 2023-05-11", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3007716, "CreatorUserId": 9608077, "OwnerUserId": 9608077.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5663209.0, "CurrentDatasourceVersionId": 5738664.0, "ForumId": 3046724, "Type": 2, "CreationDate": "03/16/2023 02:28:22", "LastActivityDate": "03/16/2023", "TotalViews": 119, "TotalDownloads": 15, "TotalVotes": 0, "TotalKernels": 2}]
|
[{"Id": 9608077, "UserName": "dngovn", "DisplayName": "D\u0169ng \u0110\u00e0o V\u0103n", "RegisterDate": "02/08/2022", "PerformanceTier": 0}]
|
| false | 0 | 1,263 | 0 | 1,280 | 1,263 |
||
129079987
|
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.cluster import DBSCAN
from lightgbm import LGBMClassifier
from sklearn import metrics
# Link to the document with the articles: https://docs.google.com/document/d/1q4Uoee8wUVpunF7ZWaZ4oxaEqwlFsBW2/edit?usp=sharing&ouid=101393046997272752391&rtpof=true&sd=true
train_df = pd.read_csv("train_dataset.csv", index_col=0)
train_df.shape
train_df.head()
features = train_df.drop("label", axis=1).columns
X, y = train_df[features], train_df["label"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, stratify=y, random_state=42
)
# LGBM was used for feature selection; this was done analogously to the classification approach.
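# The helper below is a hedged sketch that condenses the same one-vs-rest feature-selection pattern used in the cells that follow: for a given class label, fit a binary LGBMClassifier and keep its top-k features by importance. The function name top_features_for_class and its default hyper-parameters are illustrative assumptions, not part of the original notebook; the per-class cells below remain the selection actually used.
def top_features_for_class(label_name, max_depth=10, n_estimators=50, k=200):
    # illustrative helper (assumed name); relies on the X_train / y_train split created above
    y_binary = (y_train == label_name).astype(int)
    clf = LGBMClassifier(max_depth=max_depth, n_estimators=n_estimators).fit(X_train, y_binary)
    importance = pd.DataFrame(
        {"feature_name": clf.feature_name_, "importance": clf.feature_importances_}
    ).sort_values("importance", ascending=False)
    return list(importance["feature_name"].iloc[:k])


# Example usage (uncomment to run): selected_features = top_features_for_class("colorectal")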
model_LGBM_bt1 = LGBMClassifier(max_depth=16, n_estimators=100)
y_train_bt1 = (y_train == "brain_type1").astype(int)
LGBM_train_bt1 = model_LGBM_bt1.fit(X_train, y_train_bt1)
importance_df_bt1 = (
pd.DataFrame(
{
"feature_name": LGBM_train_bt1.feature_name_,
"importance_gain": LGBM_train_bt1.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_bt1.iloc[0:10])
list_bt1 = list(importance_df_bt1.iloc[0:200, 0])
print(list_bt1)
y_test_pred_bt1 = model_LGBM_bt1.predict(X_test)
y_test_bt1 = (y_test == "brain_type1").astype(int)
metrics.f1_score(y_test_pred_bt1, y_test_bt1)
model_LGBM_bt2 = LGBMClassifier(max_depth=10, n_estimators=50)
y_train_bt2 = (y_train == "brain_type2").astype(int)
LGBM_train_bt2 = model_LGBM_bt2.fit(X_train, y_train_bt2)
importance_df_bt2 = (
pd.DataFrame(
{
"feature_name": LGBM_train_bt2.feature_name_,
"importance_gain": LGBM_train_bt2.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_bt2.iloc[0:10])
list_bt2 = list(importance_df_bt2.iloc[0:200, 0])
print(list_bt2)
y_test_pred_bt2 = model_LGBM_bt2.predict(X_test)
y_test_bt2 = (y_test == "brain_type2").astype(int)
metrics.f1_score(y_test_pred_bt2, y_test_bt2)
model_LGBM_bt3 = LGBMClassifier(max_depth=10, n_estimators=75)
y_train_bt3 = (y_train == "brain_type3").astype(int)
LGBM_train_bt3 = model_LGBM_bt3.fit(X_train, y_train_bt3)
importance_df_bt3 = (
pd.DataFrame(
{
"feature_name": LGBM_train_bt3.feature_name_,
"importance_gain": LGBM_train_bt3.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_bt3.iloc[0:10])
list_bt3 = list(importance_df_bt3.iloc[0:200, 0])
print(list_bt3)
y_test_pred_bt3 = model_LGBM_bt3.predict(X_test)
y_test_bt3 = (y_test == "brain_type3").astype(int)
metrics.f1_score(y_test_pred_bt3, y_test_bt3)
model_LGBM_brt1 = LGBMClassifier(max_depth=15, n_estimators=100)
y_train_brt1 = (y_train == "brest_type1").astype(int)
LGBM_train_brt1 = model_LGBM_brt1.fit(X_train, y_train_brt1)
importance_df_brt1 = (
pd.DataFrame(
{
"feature_name": LGBM_train_brt1.feature_name_,
"importance_gain": LGBM_train_brt1.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_brt1.iloc[0:10])
list_brt1 = list(importance_df_brt1.iloc[0:200, 0])
print(list_brt1)
y_test_pred_brt1 = model_LGBM_brt1.predict(X_test)
y_test_brt1 = (y_test == "brest_type1").astype(int)
metrics.f1_score(y_test_pred_brt1, y_test_brt1)
model_LGBM_brt2 = LGBMClassifier(max_depth=10, n_estimators=50)
y_train_brt2 = (y_train == "brest_type2").astype(int)
LGBM_train_brt2 = model_LGBM_brt2.fit(X_train, y_train_brt2)
importance_df_brt2 = (
pd.DataFrame(
{
"feature_name": LGBM_train_brt2.feature_name_,
"importance_gain": LGBM_train_brt2.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_brt2.iloc[0:10])
list_brt2 = list(importance_df_brt2.iloc[0:200, 0])
print(list_brt2)
y_test_pred_brt2 = model_LGBM_brt2.predict(X_test)
y_test_brt2 = (y_test == "brest_type2").astype(int)
metrics.f1_score(y_test_pred_brt2, y_test_brt2)
model_LGBM_brt3 = LGBMClassifier(max_depth=15, n_estimators=100)
y_train_brt3 = (y_train == "brest_type3").astype(int)
LGBM_train_brt3 = model_LGBM_brt3.fit(X_train, y_train_brt3)
importance_df_brt3 = (
pd.DataFrame(
{
"feature_name": LGBM_train_brt3.feature_name_,
"importance_gain": LGBM_train_brt3.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_brt3.iloc[0:10])
list_brt3 = list(importance_df_brt3.iloc[0:200, 0])
print(list_brt3)
y_test_pred_brt3 = model_LGBM_brt3.predict(X_test)
y_test_brt3 = (y_test == "brest_type3").astype(int)
metrics.f1_score(y_test_pred_brt3, y_test_brt3)
model_LGBM_col = LGBMClassifier(max_depth=10, n_estimators=50)
y_train_col = (y_train == "colorectal").astype(int)
LGBM_train_col = model_LGBM_col.fit(X_train, y_train_col)
importance_df_col = (
pd.DataFrame(
{
"feature_name": LGBM_train_col.feature_name_,
"importance_gain": LGBM_train_col.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_col.iloc[0:10])
list_col = list(importance_df_col.iloc[0:200, 0])
print(list_col)
y_test_pred_col = model_LGBM_col.predict(X_test)
y_test_col = (y_test == "colorectal").astype(int)
metrics.f1_score(y_test_pred_col, y_test_col)
model_LGBM_es = LGBMClassifier(max_depth=10, n_estimators=50)
y_train_es = (y_train == "esophageal").astype(int)
LGBM_train_es = model_LGBM_es.fit(X_train, y_train_es)
importance_df_es = (
pd.DataFrame(
{
"feature_name": LGBM_train_es.feature_name_,
"importance_gain": LGBM_train_es.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_es.iloc[0:10])
list_es = list(importance_df_es.iloc[0:200, 0])
print(list_es)
y_test_pred_es = model_LGBM_es.predict(X_test)
y_test_es = (y_test == "esophageal").astype(int)
metrics.f1_score(y_test_pred_es, y_test_es)
columns = list(
set(
list_es
+ list_col
+ list_brt1
+ list_brt2
+ list_brt3
+ list_bt1
+ list_bt2
+ list_bt3
)
)
print(len(columns))
print(columns)
X_col = pd.DataFrame(data=X, columns=columns)
X_col
label_mapping = {k: num for num, k in enumerate(y.unique())}
label_mapping
y = [label_mapping[item] for item in y]
y
X_train_col, X_test_col, y_train_col, y_test_col = train_test_split(
X_col, y, test_size=0.2, stratify=y, random_state=42
)
# k-nearest neighbors
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train_col, y_train_col)
y_test_pred_knn = knn.predict(X_test_col)
metrics.f1_score(y_test_pred_knn, y_test_col, average="weighted")
from sklearn.preprocessing import StandardScaler
# create a StandardScaler object
scaler = StandardScaler()
scaler.fit(X_train_col)
# transform the train_x and test_x datasets
train_x_scaler = scaler.transform(X_train_col)
test_x_scaler = scaler.transform(X_test_col)
knns = KNeighborsClassifier(n_neighbors=4)
knns.fit(train_x_scaler, y_train_col)
y_pred_knn_scaler = knns.predict(test_x_scaler)
metrics.f1_score(y_pred_knn_scaler, y_test_col, average="weighted")
# import the PCA class
from sklearn.decomposition import PCA
# create a PCA object
pca = PCA(n_components=40, random_state=42)
pca.fit(train_x_scaler)
# reduce the dimensionality of the data
train_x_pca = pca.transform(train_x_scaler)
test_x_pca = pca.transform(test_x_scaler)
knn2 = KNeighborsClassifier(n_neighbors=4)
knn2.fit(train_x_pca, y_train_col)
y_pred_knn_pca = knn2.predict(test_x_pca)
metrics.f1_score(y_pred_knn_pca, y_test_col, average="weighted")
test_df = pd.read_csv("test_dataset.csv", index_col=0)
test_df.shape
test_df.head()
X_test_df = pd.DataFrame(data=test_df, columns=columns)
X_test_df_scaler = scaler.transform(X_test_df)
X_test_df_pca = pca.transform(X_test_df_scaler)
predictions = knn2.predict(X_test_df_pca)
predictions_df = pd.DataFrame(
data=predictions, index=test_df.index, columns=["Predicted"]
)
predictions_df
predictions_df.index.name = "Id"
predictions_df.head()
predictions_df["Predicted"].map({v: k for k, v in label_mapping.items()})
predictions_df["Predicted"] = predictions_df["Predicted"].map(
{v: k for k, v in label_mapping.items()}
)
predictions_df
predictions_df.to_csv("submission_cluster 2.csv")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/079/129079987.ipynb
| null | null |
[{"Id": 129079987, "ScriptId": 38372096, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13032224, "CreationDate": "05/10/2023 20:45:49", "VersionNumber": 1.0, "Title": "\u0414\u0436\u0430\u0440\u0443\u043b\u043b\u0430\u0435\u0432\u0430 \u0410\u0428 + \u0441\u0441\u044b\u043b\u043a\u0430 \u043d\u0430 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442 \u0441\u043e \u0441\u0442\u0430\u0442\u044c\u044f\u043c\u0438", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 283.0, "LinesInsertedFromPrevious": 283.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.cluster import DBSCAN
from lightgbm import LGBMClassifier
from sklearn import metrics
# Ссылка на документ со статьями: https://docs.google.com/document/d/1q4Uoee8wUVpunF7ZWaZ4oxaEqwlFsBW2/edit?usp=sharing&ouid=101393046997272752391&rtpof=true&sd=true
train_df = pd.read_csv("train_dataset.csv", index_col=0)
train_df.shape
train_df.head()
features = train_df.drop("label", axis=1).columns
X, y = train_df[features], train_df["label"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, stratify=y, random_state=42
)
# LGBM использовала для выбора признаковю Делала аналогично классификации.
model_LGBM_bt1 = LGBMClassifier(max_depth=16, n_estimators=100)
y_train_bt1 = (y_train == "brain_type1").astype(int)
LGBM_train_bt1 = model_LGBM_bt1.fit(X_train, y_train_bt1)
importance_df_bt1 = (
pd.DataFrame(
{
"feature_name": LGBM_train_bt1.feature_name_,
"importance_gain": LGBM_train_bt1.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_bt1.iloc[0:10])
list_bt1 = list(importance_df_bt1.iloc[0:200, 0])
print(list_bt1)
y_test_pred_bt1 = model_LGBM_bt1.predict(X_test)
y_test_bt1 = (y_test == "brain_type1").astype(int)
metrics.f1_score(y_test_pred_bt1, y_test_bt1)
model_LGBM_bt2 = LGBMClassifier(max_depth=10, n_estimators=50)
y_train_bt2 = (y_train == "brain_type2").astype(int)
LGBM_train_bt2 = model_LGBM_bt2.fit(X_train, y_train_bt2)
importance_df_bt2 = (
pd.DataFrame(
{
"feature_name": LGBM_train_bt2.feature_name_,
"importance_gain": LGBM_train_bt2.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_bt2.iloc[0:10])
list_bt2 = list(importance_df_bt2.iloc[0:200, 0])
print(list_bt2)
y_test_pred_bt2 = model_LGBM_bt2.predict(X_test)
y_test_bt2 = (y_test == "brain_type2").astype(int)
metrics.f1_score(y_test_pred_bt2, y_test_bt2)
model_LGBM_bt3 = LGBMClassifier(max_depth=10, n_estimators=75)
y_train_bt3 = (y_train == "brain_type3").astype(int)
LGBM_train_bt3 = model_LGBM_bt3.fit(X_train, y_train_bt3)
importance_df_bt3 = (
pd.DataFrame(
{
"feature_name": LGBM_train_bt3.feature_name_,
"importance_gain": LGBM_train_bt3.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_bt3.iloc[0:10])
list_bt3 = list(importance_df_bt3.iloc[0:200, 0])
print(list_bt3)
y_test_pred_bt3 = model_LGBM_bt3.predict(X_test)
y_test_bt3 = (y_test == "brain_type3").astype(int)
metrics.f1_score(y_test_pred_bt3, y_test_bt3)
model_LGBM_brt1 = LGBMClassifier(max_depth=15, n_estimators=100)
y_train_brt1 = (y_train == "brest_type1").astype(int)
LGBM_train_brt1 = model_LGBM_brt1.fit(X_train, y_train_brt1)
importance_df_brt1 = (
pd.DataFrame(
{
"feature_name": LGBM_train_brt1.feature_name_,
"importance_gain": LGBM_train_brt1.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_brt1.iloc[0:10])
list_brt1 = list(importance_df_brt1.iloc[0:200, 0])
print(list_brt1)
y_test_pred_brt1 = model_LGBM_brt1.predict(X_test)
y_test_brt1 = (y_test == "brest_type1").astype(int)
metrics.f1_score(y_test_pred_brt1, y_test_brt1)
model_LGBM_brt2 = LGBMClassifier(max_depth=10, n_estimators=50)
y_train_brt2 = (y_train == "brest_type2").astype(int)
LGBM_train_brt2 = model_LGBM_brt2.fit(X_train, y_train_brt2)
importance_df_brt2 = (
pd.DataFrame(
{
"feature_name": LGBM_train_brt2.feature_name_,
"importance_gain": LGBM_train_brt2.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_brt2.iloc[0:10])
list_brt2 = list(importance_df_brt2.iloc[0:200, 0])
print(list_brt2)
y_test_pred_brt2 = model_LGBM_brt2.predict(X_test)
y_test_brt2 = (y_test == "brest_type2").astype(int)
metrics.f1_score(y_test_pred_brt2, y_test_brt2)
model_LGBM_brt3 = LGBMClassifier(max_depth=15, n_estimators=100)
y_train_brt3 = (y_train == "brest_type3").astype(int)
LGBM_train_brt3 = model_LGBM_brt3.fit(X_train, y_train_brt3)
importance_df_brt3 = (
pd.DataFrame(
{
"feature_name": LGBM_train_brt3.feature_name_,
"importance_gain": LGBM_train_brt3.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_brt3.iloc[0:10])
list_brt3 = list(importance_df_brt3.iloc[0:200, 0])
print(list_brt3)
y_test_pred_brt3 = model_LGBM_brt3.predict(X_test)
y_test_brt3 = (y_test == "brest_type3").astype(int)
metrics.f1_score(y_test_pred_brt3, y_test_brt3)
model_LGBM_col = LGBMClassifier(max_depth=10, n_estimators=50)
y_train_col = (y_train == "colorectal").astype(int)
LGBM_train_col = model_LGBM_col.fit(X_train, y_train_col)
importance_df_col = (
pd.DataFrame(
{
"feature_name": LGBM_train_col.feature_name_,
"importance_gain": LGBM_train_col.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_col.iloc[0:10])
list_col = list(importance_df_col.iloc[0:200, 0])
print(list_col)
y_test_pred_col = model_LGBM_col.predict(X_test)
y_test_col = (y_test == "colorectal").astype(int)
metrics.f1_score(y_test_pred_col, y_test_col)
model_LGBM_es = LGBMClassifier(max_depth=10, n_estimators=50)
y_train_es = (y_train == "esophageal").astype(int)
LGBM_train_es = model_LGBM_es.fit(X_train, y_train_es)
importance_df_es = (
pd.DataFrame(
{
"feature_name": LGBM_train_es.feature_name_,
"importance_gain": LGBM_train_es.feature_importances_,
}
)
.sort_values("importance_gain", ascending=False)
.reset_index(drop=True)
)
print(importance_df_es.iloc[0:10])
list_es = list(importance_df_es.iloc[0:200, 0])
print(list_es)
y_test_pred_es = model_LGBM_es.predict(X_test)
y_test_es = (y_test == "esophageal").astype(int)
metrics.f1_score(y_test_pred_es, y_test_es)
columns = list(
set(
list_es
+ list_col
+ list_brt1
+ list_brt2
+ list_brt3
+ list_bt1
+ list_bt2
+ list_bt3
)
)
print(len(columns))
print(columns)
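# The eight per-class blocks above repeat the same LGBM-based feature-selection pattern.
# A minimal refactoring sketch (the helper name `top_features_for_class` is hypothetical,
# not part of the original notebook) that would produce the same per-class top-N lists:
def top_features_for_class(class_name, max_depth=10, n_estimators=50, top_n=200):
    clf = LGBMClassifier(max_depth=max_depth, n_estimators=n_estimators)
    clf.fit(X_train, (y_train == class_name).astype(int))
    imp = (
        pd.DataFrame(
            {"feature_name": clf.feature_name_, "importance": clf.feature_importances_}
        )
        .sort_values("importance", ascending=False)
        .reset_index(drop=True)
    )
    return list(imp.iloc[0:top_n, 0])
# `columns` could then be built as the union of top_features_for_class(c) over all class labels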
X_col = pd.DataFrame(data=X, columns=columns)
X_col
label_mapping = {k: num for num, k in enumerate(y.unique())}
label_mapping
y = [label_mapping[item] for item in y]
y
X_train_col, X_test_col, y_train_col, y_test_col = train_test_split(
X_col, y, test_size=0.2, stratify=y, random_state=42
)
# k-nearest neighbors
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train_col, y_train_col)
y_test_pred_knn = knn.predict(X_test_col)
metrics.f1_score(y_test_pred_knn, y_test_col, average="weighted")
from sklearn.preprocessing import StandardScaler
# create a StandardScaler object
scaler = StandardScaler()
scaler.fit(X_train_col)
# transform the X_train_col and X_test_col datasets
train_x_scaler = scaler.transform(X_train_col)
test_x_scaler = scaler.transform(X_test_col)
knns = KNeighborsClassifier(n_neighbors=4)
knns.fit(train_x_scaler, y_train_col)
y_pred_knn_scaler = knns.predict(test_x_scaler)
metrics.f1_score(y_pred_knn_scaler, y_test_col, average="weighted")
# import the PCA class
from sklearn.decomposition import PCA
# create a PCA object
pca = PCA(n_components=40, random_state=42)
pca.fit(train_x_scaler)
# reduce the dimensionality of the data
train_x_pca = pca.transform(train_x_scaler)
test_x_pca = pca.transform(test_x_scaler)
knn2 = KNeighborsClassifier(n_neighbors=4)
knn2.fit(train_x_pca, y_train_col)
y_pred_knn_pca = knn2.predict(test_x_pca)
metrics.f1_score(y_pred_knn_pca, y_test_col, average="weighted")
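# The scale -> PCA -> KNN steps above can also be chained in a single scikit-learn Pipeline,
# which keeps the preprocessing fitted only on the training split. A minimal sketch, assuming
# the same X_train_col/X_test_col split as above:
from sklearn.pipeline import make_pipeline
knn_pipe = make_pipeline(
    StandardScaler(),
    PCA(n_components=40, random_state=42),
    KNeighborsClassifier(n_neighbors=4),
)
knn_pipe.fit(X_train_col, y_train_col)
metrics.f1_score(knn_pipe.predict(X_test_col), y_test_col, average="weighted")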
test_df = pd.read_csv("test_dataset.csv", index_col=0)
test_df.shape
test_df.head()
X_test_df = pd.DataFrame(data=test_df, columns=columns)
X_test_df_scaler = scaler.transform(X_test_df)
X_test_df_pca = pca.transform(X_test_df_scaler)
predictions = knn2.predict(X_test_df_pca)
predictions_df = pd.DataFrame(
data=predictions, index=test_df.index, columns=["Predicted"]
)
predictions_df
predictions_df.index.name = "Id"
predictions_df.head()
predictions_df["Predicted"].map({v: k for k, v in label_mapping.items()})
predictions_df["Predicted"] = predictions_df["Predicted"].map(
{v: k for k, v in label_mapping.items()}
)
predictions_df
predictions_df.to_csv("submission_cluster 2.csv")
| false | 0 | 3,618 | 0 | 3,618 | 3,618 |
||
129079916
|
<jupyter_start><jupyter_text>Red Wine Quality
### Context
The two datasets are related to red and white variants of the Portuguese "Vinho Verde" wine. For more details, consult the reference [Cortez et al., 2009]. Due to privacy and logistic issues, only physicochemical (inputs) and sensory (the output) variables are available (e.g. there is no data about grape types, wine brand, wine selling price, etc.).
These datasets can be viewed as classification or regression tasks. The classes are ordered and not balanced (e.g. there are much more normal wines than excellent or poor ones).
---
*This dataset is also available from the UCI machine learning repository, https://archive.ics.uci.edu/ml/datasets/wine+quality , I just shared it to kaggle for convenience. (If I am mistaken and the public license type disallowed me from doing so, I will take this down if requested.)*
### Content
For more information, read [Cortez et al., 2009].<br>
Input variables (based on physicochemical tests):<br>
1 - fixed acidity <br>
2 - volatile acidity <br>
3 - citric acid <br>
4 - residual sugar <br>
5 - chlorides <br>
6 - free sulfur dioxide <br>
7 - total sulfur dioxide <br>
8 - density <br>
9 - pH <br>
10 - sulphates <br>
11 - alcohol <br>
Output variable (based on sensory data): <br>
12 - quality (score between 0 and 10) <br>
### Tips
What might be an interesting thing to do, aside from using regression modelling, is to set an arbitrary cutoff for your dependent variable (wine quality) at e.g. 7 or higher getting classified as 'good/1' and the remainder as 'not good/0'.
This allows you to practice with hyper parameter tuning on e.g. decision tree algorithms looking at the ROC curve and the AUC value.
Without doing any kind of feature engineering or overfitting you should be able to get an AUC of .88 (without even using random forest algorithm)
**KNIME** is a great tool (GUI) that can be used for this.<br>
1 - File Reader (for csv) to linear correlation node and to interactive histogram for basic EDA.<br>
2- File Reader to 'Rule Engine Node' to turn the 10-point scale into a dichotomous variable (good wine and the rest), the code to put in the rule engine is something like this:<br>
- **$quality$ > 6.5 => "good"**<br>
- **TRUE => "bad"** <br>
3- Rule Engine Node output to input of Column Filter node to filter out your original 10point feature (this prevent leaking)<br>
4- Column Filter Node output to input of Partitioning Node (your standard train/test split, e.g. 75%/25%, choose 'random' or 'stratified')<br>
5- Partitioning Node train data split output to input of Decision Tree Learner Node and <br>
6- Partitioning Node test data split output to input of Decision Tree Predictor Node<br>
7- Decision Tree Learner Node model output to input of Decision Tree Predictor Node<br>
8- Decision Tree Predictor output to input of ROC Node (here you can evaluate your model based on the AUC value)<br>
### Inspiration
Use machine learning to determine which physiochemical properties make a wine 'good'!
Kaggle dataset identifier: red-wine-quality-cortez-et-al-2009
<jupyter_script># # Importing the Libraries
import numpy as np # to create numpy arrays
import pandas as pd # to create pandas dataframe
import matplotlib.pyplot as plt # for making plots and graphs
import seaborn as sns # for data visualization
import warnings
warnings.filterwarnings("ignore")
from sklearn.model_selection import (
train_test_split,
) # to split data into training data and testing data
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score # to evaluate the model
# # Data Collection
# loading the dataset to a pandas dataframe
wine_dataset = pd.read_csv(
"/kaggle/input/red-wine-quality-cortez-et-al-2009/winequality-red.csv"
)
# checking the first 5 rows of the dataset
wine_dataset.head()
# checking the number of rows and columns in the dataset
wine_dataset.shape
# getting some information about the dataset
wine_dataset.info()
# checking for missing values in each column
wine_dataset.isnull().sum()
# > We don't have any missing values in our dataset
# # Data Analysis and Visualization
# getting statistical measures of the dataset
wine_dataset.describe()
# finding the number of values for each quality
sns.catplot(x="quality", data=wine_dataset, kind="count")
# volatile acidity vs quality
plot = plt.figure(figsize=(5, 5))
sns.barplot(x="quality", y="volatile acidity", data=wine_dataset)
# > 'volatile acidity' and 'quality' are inversely related (negative correlation)
# citric acid vs quality
plot = plt.figure(figsize=(5, 5))
sns.barplot(x="quality", y="citric acid", data=wine_dataset)
# > Here, when the 'citric acid' content is higher, we get higher 'quality' wine.
# checking the distribution of the data
wine_dataset.hist(bins=100, figsize=(10, 10))
plt.show()
# # Correlation
# correlation between all the columns to the quality column
correlation = wine_dataset.corr()
# constructing a heatmap to understand the correlation between the columns
plt.figure(figsize=(10, 7))
sns.heatmap(correlation, annot=True)
# printing correlation values
wine_dataset.corr()["quality"].sort_values()
# > 'alcohol' has the highest correlation with the target -- quality
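# A quick sketch for ranking the strongest (absolute) correlations with the target:
wine_dataset.corr()["quality"].drop("quality").abs().sort_values(ascending=False).head(5)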
# # Data Preprocessing
# separating the features and label
X = wine_dataset.drop("quality", axis=1)
print(X.head(2))
# **Label Binarization**
Y = wine_dataset["quality"].apply(lambda y_value: 1 if y_value >= 6.5 else 0)
print(Y)
# > So here we have mapped the different wine quality ratings to 1 (good, quality >= 6.5) and 0 (bad)
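# Note that this cutoff leaves the classes imbalanced; a quick check (sketch):
print(Y.value_counts(normalize=True))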
# # Train & Test Split
# splitting X,Y into training and testing data
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.2, random_state=3
) # assigned 20% for test
print(X.shape, X_train.shape, X_test.shape)
# # Model Training
# **Model 1 - Logistic Regression**
logreg = LogisticRegression()
# training the model with training data
logreg.fit(X_train, Y_train)
# model evaluation
logreg_pred = logreg.predict(X_test)
logreg_acc = accuracy_score(logreg_pred, Y_test)
print("Test accuracy score is: ", logreg_acc * 100)
# **Model 2 - Decision Tree Model**
dtree = DecisionTreeClassifier()
# training the model
dtree.fit(X_train, Y_train)
# model evaluation
dtree_pred = dtree.predict(X_test)
dtree_acc = accuracy_score(dtree_pred, Y_test)
print("Test Accuracy score is:", dtree_acc * 100)
# **Model 3 - Random Forest Classifier**
rforest = RandomForestClassifier()
# training the model
rforest.fit(X_train, Y_train)
# model evaluation
rforest_pred = rforest.predict(X_test)
rforest_acc = accuracy_score(rforest_pred, Y_test)
print("Test Accuracy score is:", rforest_acc * 100)
# > Conclusion:
# > Random Forest has better accuracy than the other two models (Logistic Regression and Decision Tree)
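# Since the 'good' class is the minority, accuracy alone can be flattering; a hedged sketch
# comparing ROC AUC for the three already-fitted models on the same X_test/Y_test:
from sklearn.metrics import roc_auc_score
for name, clf in [("LogReg", logreg), ("DecisionTree", dtree), ("RandomForest", rforest)]:
    auc = roc_auc_score(Y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name} ROC AUC: {auc:.3f}")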
# # Building a Predictive System
input_data = (7.3, 0.65, 0.0, 1.2, 0.065, 15.0, 21.0, 0.9946, 3.39, 0.47, 10.0)
# input_data = (7.5,0.5,0.36,6.1,0.071,17.0,102.0,0.9978,3.35,0.8,10.5)
# changing the input data to a numpy array
input_data_as_numpy_array = np.asarray(input_data)
# reshaping the data as we are predicting the label for only one instance
input_data_reshaped = input_data_as_numpy_array.reshape(1, -1)
prediction = rforest.predict(input_data_reshaped)
print(prediction)
if prediction[0] == 1:
print("Good Quality Wine!")
else:
print("Bad Quality Wine")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/079/129079916.ipynb
|
red-wine-quality-cortez-et-al-2009
| null |
[{"Id": 129079916, "ScriptId": 38353133, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14119393, "CreationDate": "05/10/2023 20:44:41", "VersionNumber": 3.0, "Title": "Wine Quality Prediction", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 162.0, "LinesInsertedFromPrevious": 76.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 86.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184816396, "KernelVersionId": 129079916, "SourceDatasetVersionId": 8204}]
|
[{"Id": 8204, "DatasetId": 4458, "DatasourceVersionId": 8204, "CreatorUserId": 1132983, "LicenseName": "Database: Open Database, Contents: Database Contents", "CreationDate": "11/27/2017 23:41:08", "VersionNumber": 2.0, "Title": "Red Wine Quality", "Slug": "red-wine-quality-cortez-et-al-2009", "Subtitle": "Simple and clean practice dataset for regression or classification modelling", "Description": "### Context\n\nThe two datasets are related to red and white variants of the Portuguese \"Vinho Verde\" wine. For more details, consult the reference [Cortez et al., 2009]. Due to privacy and logistic issues, only physicochemical (inputs) and sensory (the output) variables are available (e.g. there is no data about grape types, wine brand, wine selling price, etc.). \n\nThese datasets can be viewed as classification or regression tasks. The classes are ordered and not balanced (e.g. there are much more normal wines than excellent or poor ones). \n\n---\n*This dataset is also available from the UCI machine learning repository, https://archive.ics.uci.edu/ml/datasets/wine+quality , I just shared it to kaggle for convenience. (If I am mistaken and the public license type disallowed me from doing so, I will take this down if requested.)*\n\n\n### Content\n\nFor more information, read [Cortez et al., 2009].<br>\nInput variables (based on physicochemical tests):<br>\n1 - fixed acidity <br>\n2 - volatile acidity <br>\n3 - citric acid <br>\n4 - residual sugar <br>\n5 - chlorides <br>\n6 - free sulfur dioxide <br> \n7 - total sulfur dioxide <br>\n8 - density <br>\n9 - pH <br>\n10 - sulphates <br>\n11 - alcohol <br>\nOutput variable (based on sensory data): <br>\n12 - quality (score between 0 and 10) <br>\n\n### Tips\nWhat might be an interesting thing to do, is aside from using regression modelling, is to set an arbitrary cutoff for your dependent variable (wine quality) at e.g. 7 or higher getting classified as 'good/1' and the remainder as 'not good/0'.\nThis allows you to practice with hyper parameter tuning on e.g. decision tree algorithms looking at the ROC curve and the AUC value.\nWithout doing any kind of feature engineering or overfitting you should be able to get an AUC of .88 (without even using random forest algorithm)\n\n**KNIME** is a great tool (GUI) that can be used for this.<br>\n1 - File Reader (for csv) to linear correlation node and to interactive histogram for basic EDA.<br>\n2- File Reader to 'Rule Engine Node' to turn the 10 point scale to dichtome variable (good wine and rest), the code to put in the rule engine is something like this:<br>\n - **$quality$ > 6.5 => \"good\"**<br>\n - **TRUE => \"bad\"** <br>\n3- Rule Engine Node output to input of Column Filter node to filter out your original 10point feature (this prevent leaking)<br>\n4- Column Filter Node output to input of Partitioning Node (your standard train/tes split, e.g. 75%/25%, choose 'random' or 'stratified')<br>\n5- Partitioning Node train data split output to input of Train data split to input Decision Tree Learner node and <br>\n6- Partitioning Node test data split output to input Decision Tree predictor Node<br>\n7- Decision Tree learner Node output to input Decision Tree Node input<br>\n8- Decision Tree output to input ROC Node.. 
(here you can evaluate your model base on AUC value)<br>\n\n\n### Inspiration\nUse machine learning to determine which physiochemical properties make a wine 'good'!\n\n\n\n### Acknowledgements\n\nThis dataset is also available from the UCI machine learning repository, https://archive.ics.uci.edu/ml/datasets/wine+quality , I just shared it to kaggle for convenience. *(I am mistaken and the public license type disallowed me from doing so, I will take this down at first request. I am not the owner of this dataset.*\n\n**Please include this citation if you plan to use this database: \nP. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. \nModeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.**\n\n### Relevant publication\n\nP. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. \nIn Decision Support Systems, Elsevier, 47(4):547-553, 2009.", "VersionNotes": "Fixed csv format to use comma as delimiter", "TotalCompressedBytes": 100951.0, "TotalUncompressedBytes": 100951.0}]
|
[{"Id": 4458, "CreatorUserId": 1132983, "OwnerUserId": NaN, "OwnerOrganizationId": 7.0, "CurrentDatasetVersionId": 8204.0, "CurrentDatasourceVersionId": 8204.0, "ForumId": 10170, "Type": 2, "CreationDate": "11/12/2017 14:08:43", "LastActivityDate": "02/06/2018", "TotalViews": 1214229, "TotalDownloads": 194418, "TotalVotes": 2537, "TotalKernels": 1574}]
| null |
| false | 0 | 1,379 | 0 | 2,247 | 1,379 |
||
129533101
|
# ## Imports
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import nltk
from tqdm import tqdm
from matplotlib import pyplot as plt
train = pd.read_csv("/kaggle/input/nlp-getting-started/train.csv")
test = pd.read_csv("/kaggle/input/nlp-getting-started/test.csv")
sub = pd.read_csv("/kaggle/input/nlp-getting-started/sample_submission.csv")
train.head(3)
# ## Transformer
from transformers import (
AutoModel,
AutoTokenizer,
AutoConfig,
get_cosine_schedule_with_warmup,
)
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.optim import AdamW
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
class CFG:
max_len = 512
batch_size = 32
epochs = 1
model_str = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder_lr = 1e-5
decoder_lr = 1e-4
eps = 1e-6
betas = (0.9, 0.999)
weight_decay = 0.01
max_grad_norm = 0.012
num_cycles = 0.5
warmup_ratio = 0.1
max_grad_norm = 0.012
def prepare_inputs(inputs, max_len):
inputs = CFG.tokenizer.encode_plus(
inputs,
return_tensors=None,
add_special_tokens=True,
max_length=max_len,
truncation=True,
pad_to_max_length=True,
)
return {k: torch.tensor(v, dtype=torch.long).to(device) for k, v in inputs.items()}
class CustomDataset:
def __init__(self, df, max_len, train=True):
self.text = df["text"]
self.train = train
if train == True:
self.target = df["target"]
self.max_len = max_len
def __len__(self):
return len(self.text)
def __getitem__(self, idx):
inputs = prepare_inputs(self.text[idx], self.max_len)
if self.train == True:
labels = torch.tensor(self.target[idx], dtype=torch.float).to(device)
return inputs, labels
else:
return inputs
class MeanPooling(nn.Module):
def __init__(self):
super(MeanPooling, self).__init__()
def forward(self, last_hidden_state, attention_mask):
input_mask_expanded = (
attention_mask.unsqueeze(-1).expand(last_hidden_state.size()).float()
)
sum_embeddings = torch.sum(last_hidden_state * input_mask_expanded, 1)
sum_mask = input_mask_expanded.sum(1)
sum_mask = torch.clamp(sum_mask, min=1e-9)
mean_embeddings = sum_embeddings / sum_mask
return mean_embeddings
class CustomModel(nn.Module):
def __init__(self):
super().__init__()
self.model = AutoModel.from_pretrained(CFG.model_str)
self.config = AutoConfig.from_pretrained(CFG.model_str)
self.config.hidden_dropout = 0.0
self.config.hidden_dropout_prob = 0.0
self.config.attention_dropout = 0.0
self.config.attention_probs_dropout_prob = 0.0
self.linear = nn.Linear(self.config.hidden_size, 1)
self.pool = MeanPooling()
def forward(self, inputs):
out = self.model(**inputs)
x = self.pool(out["last_hidden_state"], inputs["attention_mask"])
x = self.linear(x)
return x
def collate(inputs):
mask_len = int(inputs["attention_mask"].sum(axis=1).max())
for k, v in inputs.items():
inputs[k] = inputs[k][:, :mask_len]
return inputs
dataset = CustomDataset(train, CFG.max_len)
loader = DataLoader(dataset=dataset, batch_size=CFG.batch_size, shuffle=True)
model = CustomModel().to(device)
criterion = nn.BCEWithLogitsLoss(reduction="mean")
def get_optimizer_params(model, encoder_lr, decoder_lr, weight_decay=0.0):
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_parameters = [
{
"params": [
p
for n, p in model.model.named_parameters()
if not any(nd in n for nd in no_decay)
],
"lr": encoder_lr,
"weight_decay": weight_decay,
},
{
"params": [
p
for n, p in model.model.named_parameters()
if any(nd in n for nd in no_decay)
],
"lr": encoder_lr,
"weight_decay": 0.0,
},
{
"params": [p for n, p in model.named_parameters() if "model" not in n],
"lr": decoder_lr,
"weight_decay": 0.0,
},
]
return optimizer_parameters
# for name,param in model.named_parameters():
#     if 'model' in name:
#         param.requires_grad = False
optimizer_parameters = get_optimizer_params(
model,
encoder_lr=CFG.encoder_lr,
decoder_lr=CFG.decoder_lr,
weight_decay=CFG.weight_decay,
)
optimizer = AdamW(optimizer_parameters, lr=CFG.encoder_lr, eps=CFG.eps, betas=CFG.betas)
num_train_steps = int(len(train) / CFG.batch_size * CFG.epochs)
num_warmup_steps = num_train_steps * CFG.warmup_ratio
# Scheduler
scheduler = get_cosine_schedule_with_warmup(
optimizer,
num_warmup_steps=num_warmup_steps,
num_training_steps=num_train_steps,
num_cycles=CFG.num_cycles,
)
def train_fn(model, optimizer, criterion, scheduler, loader, epochs):
torch.autograd.set_detect_anomaly(True)
for i in range(epochs):
print(f"Epoch {i}")
scaler = torch.cuda.amp.GradScaler(enabled=True)
global_step = 0
running_loss = 0.0
correct_preds = 0
total_preds = 0
for inputs, target in loader:
inputs = collate(inputs)
with torch.cuda.amp.autocast(enabled=True):
y_preds = model(inputs)
loss = criterion(y_preds.view(-1), target)
optimizer.zero_grad()
scaler.scale(loss).backward()
scaler.unscale_(optimizer)
grad_norm = torch.nn.utils.clip_grad_norm_(
model.parameters(), CFG.max_grad_norm
)
scaler.step(optimizer)
scaler.update()
global_step += 1
running_loss += loss.item()
predicted_labels = torch.round(torch.sigmoid(y_preds)).squeeze()
correct_preds += (predicted_labels == target).sum().item()
total_preds += len(target)
scheduler.step()
if global_step % 100 == 0:
avg_loss = running_loss / 100
acc = correct_preds / total_preds
print(
f"Step {global_step}: loss = {avg_loss:.3f}, accuracy = {acc:.3f}"
)
running_loss = 0.0
correct_preds = 0
total_preds = 0
train_fn(model, optimizer, criterion, scheduler, loader, epochs=4)  # note: the cosine schedule above was sized for CFG.epochs epochs, so epochs=4 runs past it
dataset = CustomDataset(test, CFG.max_len, train=False)
loader = DataLoader(dataset=dataset, batch_size=CFG.batch_size, shuffle=False)
model.eval()  # switch to evaluation mode for inference
test_preds = []
with torch.no_grad():  # no gradients are needed at inference time
    for inputs in loader:
        inputs = collate(inputs)
        with torch.cuda.amp.autocast(enabled=True):
            ypreds = model(inputs)
        ypreds = torch.round(torch.sigmoid(ypreds)).squeeze()
        test_preds.extend(ypreds.tolist())
sub["id"] = test["id"]
sub["target"] = test_preds
sub["target"] = sub["target"].astype("int64")
sub.to_csv("submission.csv", index=False)
sub.head()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/533/129533101.ipynb
| null | null |
[{"Id": 129533101, "ScriptId": 38516179, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4611683, "CreationDate": "05/14/2023 15:38:37", "VersionNumber": 1.0, "Title": "Pytorch \ud83d\udd25 Transformer \ud83e\udd17 Simple Baseline", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 215.0, "LinesInsertedFromPrevious": 8.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 207.0, "LinesInsertedFromFork": 8.0, "LinesDeletedFromFork": 99.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 207.0, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 2,122 | 0 | 2,122 | 2,122 |
||
129969666
|
<jupyter_start><jupyter_text>Credit Card Fraud Detection
Context
---------
It is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items that they did not purchase.
Content
---------
The dataset contains transactions made by credit cards in September 2013 by European cardholders.
This dataset presents transactions that occurred in two days, where we have 492 frauds out of 284,807 transactions. The dataset is highly unbalanced, the positive class (frauds) account for 0.172% of all transactions.
It contains only numerical input variables which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, we cannot provide the original features and more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA, the only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. The feature 'Amount' is the transaction Amount, this feature can be used for example-dependant cost-sensitive learning. Feature 'Class' is the response variable and it takes value 1 in case of fraud and 0 otherwise.
Given the class imbalance ratio, we recommend measuring the accuracy using the Area Under the Precision-Recall Curve (AUPRC). Confusion matrix accuracy is not meaningful for unbalanced classification.
Update (03/05/2021)
---------
A simulator for transaction data has been released as part of the practical handbook on Machine Learning for Credit Card Fraud Detection - https://fraud-detection-handbook.github.io/fraud-detection-handbook/Chapter_3_GettingStarted/SimulatedDataset.html. We invite all practitioners interested in fraud detection datasets to also check out this data simulator, and the methodologies for credit card fraud detection presented in the book.
Acknowledgements
---------
The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection.
More details on current and past projects on related topics are available on [https://www.researchgate.net/project/Fraud-detection-5][1] and the page of the [DefeatFraud][2] project
Please cite the following works:
Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. [Calibrating Probability with Undersampling for Unbalanced Classification.][3] In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015
Dal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Ael; Waterschoot, Serge; Bontempi, Gianluca. [Learned lessons in credit card fraud detection from a practitioner perspective][4], Expert systems with applications,41,10,4915-4928,2014, Pergamon
Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. [Credit card fraud detection: a realistic modeling and a novel learning strategy,][5] IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE
Dal Pozzolo, Andrea [Adaptive Machine learning for credit card fraud detection][6] ULB MLG PhD thesis (supervised by G. Bontempi)
Carcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-Aël; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. [Scarff: a scalable framework for streaming credit card fraud detection with Spark][7], Information fusion,41, 182-194,2018,Elsevier
Carcillo, Fabrizio; Le Borgne, Yann-Aël; Caelen, Olivier; Bontempi, Gianluca. [Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization,][8] International Journal of Data Science and Analytics, 5,4,285-300,2018,Springer International Publishing
Bertrand Lebichot, Yann-Aël Le Borgne, Liyun He, Frederic Oblé, Gianluca Bontempi [Deep-Learning Domain Adaptation Techniques for Credit Cards Fraud Detection](https://www.researchgate.net/publication/332180999_Deep-Learning_Domain_Adaptation_Techniques_for_Credit_Cards_Fraud_Detection), INNSBDDL 2019: Recent Advances in Big Data and Deep Learning, pp 78-88, 2019
Fabrizio Carcillo, Yann-Aël Le Borgne, Olivier Caelen, Frederic Oblé, Gianluca Bontempi [Combining Unsupervised and Supervised Learning in Credit Card Fraud Detection ](https://www.researchgate.net/publication/333143698_Combining_Unsupervised_and_Supervised_Learning_in_Credit_Card_Fraud_Detection) Information Sciences, 2019
Yann-Aël Le Borgne, Gianluca Bontempi [Reproducible machine Learning for Credit Card Fraud Detection - Practical Handbook ](https://www.researchgate.net/publication/351283764_Machine_Learning_for_Credit_Card_Fraud_Detection_-_Practical_Handbook)
Bertrand Lebichot, Gianmarco Paldino, Wissam Siblini, Liyun He, Frederic Oblé, Gianluca Bontempi [Incremental learning strategies for credit cards fraud detection](https://www.researchgate.net/publication/352275169_Incremental_learning_strategies_for_credit_cards_fraud_detection), IInternational Journal of Data Science and Analytics
[1]: https://www.researchgate.net/project/Fraud-detection-5
[2]: https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/
[3]: https://www.researchgate.net/publication/283349138_Calibrating_Probability_with_Undersampling_for_Unbalanced_Classification
[4]: https://www.researchgate.net/publication/260837261_Learned_lessons_in_credit_card_fraud_detection_from_a_practitioner_perspective
[5]: https://www.researchgate.net/publication/319867396_Credit_Card_Fraud_Detection_A_Realistic_Modeling_and_a_Novel_Learning_Strategy
[6]: http://di.ulb.ac.be/map/adalpozz/pdf/Dalpozzolo2015PhD.pdf
[7]: https://www.researchgate.net/publication/319616537_SCARFF_a_Scalable_Framework_for_Streaming_Credit_Card_Fraud_Detection_with_Spark
[8]: https://www.researchgate.net/publication/332180999_Deep-Learning_Domain_Adaptation_Techniques_for_Credit_Cards_Fraud_Detection
Kaggle dataset identifier: creditcardfraud
<jupyter_script># # **Librerías**
import io
import scipy as sp
import numpy as np  # linear algebra
import pandas as pd  # data processing
import matplotlib.pyplot as plt  # basic plots
import seaborn as sns  # advanced plots
from sklearn.model_selection import train_test_split  # data splitting
from sklearn.preprocessing import StandardScaler  # data normalization
from sklearn.neural_network import MLPClassifier  # Artificial Neural Network model
from sklearn.tree import DecisionTreeClassifier  # Decision Tree model
from sklearn.ensemble import GradientBoostingClassifier  # Gradient Boosting model
from sklearn.ensemble import IsolationForest  # Isolation Forest model
from sklearn.neighbors import KNeighborsClassifier  # K-Nearest Neighbors model
from sklearn.linear_model import LogisticRegression  # Logistic Regression model
from sklearn.naive_bayes import GaussianNB  # Naïve Bayes Classifier model
from sklearn.ensemble import RandomForestClassifier  # Random Forest Classifier model
from sklearn.svm import SVC  # Support Vector Machine model
from sklearn.metrics import (
classification_report,
confusion_matrix,
accuracy_score,
precision_score,
recall_score,
f1_score,
roc_auc_score,
roc_curve,
make_scorer,
)  # metric evaluation
from sklearn.pipeline import Pipeline  # scikit-learn pipeline
from sklearn.pipeline import make_pipeline  # scikit-learn pipeline
from sklearn.model_selection import GridSearchCV, ShuffleSplit # cross validation
from sklearn import model_selection, linear_model, decomposition
from scipy.stats import uniform as sp_randFloat
from scipy.stats import randint as sp_randInt
from sklearn.tree import plot_tree
from sklearn import tree
from sklearn import metrics
# model = StreamingRFC(spf_n_fits=math.inf)
from incremental_trees.models.classification.streaming_rfc import StreamingRFC
# from imblearn.over_sampling import SMOTE
from imblearn.over_sampling._smote.base import SMOTE
from collections import Counter
# # **Data Cleaning / Preprocessing and Transformation**
from google.colab import files
uploaded = files.upload()
df = pd.read_csv(io.BytesIO(uploaded["creditcard.csv"]))
df.head()
df.plot()
plt.show()
from pandas.plotting import andrews_curves
plt.figure()
andrews_curves(df, "Class")
df.plot(subplots=True, layout=(31, 31), figsize=(31, 31), sharex=False)
# Get a more general overview of the dataset.
df.info()
# Data handling **(Preprocessing)**:
# check whether duplicate rows exist
df[df.duplicated() == True]
# drop the duplicate rows
df1 = df.drop_duplicates()
# re-check: verify that no duplicate rows remain
df1[df1.duplicated() == True]
# check whether null values exist
nulls = df.isna().sum()  # count null values in each column
df_nulls = pd.DataFrame(nulls)  # convert the result into a dataframe
df_nulls.transpose()  # transpose the dataframe and print the result
# check whether outliers exist
int_vars = df1[
[
"V1",
"V2",
"V3",
"V4",
"V5",
"V6",
"V7",
"V8",
"V9",
"V10",
"V11",
"V12",
"V13",
"V14",
"V15",
"V16",
"V17",
"V18",
"V19",
"V20",
"V21",
"V22",
"V23",
"V24",
"V25",
"V26",
"V27",
"V28",
"Amount",
"Class",
]
]
sns.pairplot(int_vars, hue="Class")
plt.show()
df1
# # **Result**
# » The data has no null values.
# » All features are of the correct type.
# » Duplicate rows were dropped.
# » Outliers were found, and they are the target of this project.
# Now the data is free of errors and ready for building the pipeline.
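# A minimal sanity-check sketch of the points summarized above (assumes df1 as defined here):
assert df1.isna().sum().sum() == 0, "unexpected null values"
assert df1.duplicated().sum() == 0, "unexpected duplicate rows"
print("cleaning checks passed:", df1.shape)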
df1.to_csv("creditcard.csv")
from google.colab import files
files.download("creditcard.csv")
# # **Hyperparameter tuning with GridSearchCV:**
# Split the data into training and test sets
df_training = df1.head(int(len(df1) * 0.8))
y_train = df_training["Class"]
X_train = df_training.drop("Class", axis=1)
df_test = df.drop(df_training.index)
y_test = df_test["Class"]
X_test = df_test.drop("Class", axis=1)
print("Ejemplos usados para entrenar: ", len(X_train))
print("Ejemplos usados para test: ", len(X_test))
# Show the 'Class' column values for the test set
y_test.value_counts()
# # **Handling the imbalanced data:**
# SMOTE: Synthetic Minority Oversampling Technique
counter = Counter(y_train)
print("Antes", counter)
# oversampling the train dataset using SMOTE
smt = SMOTE(random_state=0)
# X_train, y_train=smt.fit_resample(X_train, y_train)
X_train_sm, y_train_sm = smt.fit_resample(X_train, y_train)
counter = Counter(y_train_sm)
print("Después", counter)
# #**Hyperparameter tuning with GridSearchCV for the K-Nearest Neighbors algorithm.**
# Initialize the model
knn = KNeighborsClassifier()
# operations, in order
operations = [("knn", knn)]  # note that they are written as tuples inside a list
# set up the pipeline
pipe = Pipeline(operations)
# These are the parameters that can be tuned in the knn classifier
knn.get_params().keys()
# train the pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# we will tune 'n_neighbors'
k_values = list(range(1, 100))
k_values
# set up the grid parameters
param_grid = {
    "knn__n_neighbors": k_values
}  # any other parameter (to be tuned) can be added here
scoring = {
    "sensitivity": make_scorer(recall_score),
    "specificity": make_scorer(recall_score, pos_label=0),
}
# Putting it all together
full_cv_classifier = GridSearchCV(
    pipe, param_grid, cv=5, scoring=scoring, refit="sensitivity"
)
# Train the Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Best parameters of the model
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance is associated with KNeighborsClassifier(n_neighbors=1)
# initialize and set up the operations
knn2 = KNeighborsClassifier(n_neighbors=1)
operations = [("knn2", knn2)]
# set up the pipeline
pipe = Pipeline(operations)
# train the pipeline
pipe.fit(X_train_sm, y_train_sm)
# prediction on the test set
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
knn_auc = roc_auc_score(y_test, knn2.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, knn2.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, knn2.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, knn2.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, knn2.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(knn_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# compute the fpr and tpr for all classification thresholds
probs = knn2.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
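# The evaluation block above (confusion matrix, sensitivity/specificity, G-mean, ROC AUC) is
# repeated for every model below. A minimal helper sketch (the name `evaluate_model` is
# hypothetical, not part of the original notebook) that would compute the same figures:
def evaluate_model(fitted_model, X_eval, y_eval):
    y_pred = fitted_model.predict(X_eval)
    tn, fp, fn, tp = confusion_matrix(y_eval, y_pred).ravel()
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "g_mean": np.sqrt(sens * spec),
        "roc_auc": roc_auc_score(y_eval, fitted_model.predict_proba(X_eval)[:, 1]),
    }
# e.g. evaluate_model(knn2, X_test, y_test)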
# #**Hyperparameter tuning with GridSearchCV for the Decision Tree Classifier algorithm.**
# Initialize the model
dtree_model = DecisionTreeClassifier(random_state=0)
# operations, in order
operations = [
    ("dtree_model", dtree_model)
]  # note that they are written as tuples inside a list
# set up the pipeline
pipe = Pipeline(operations)
# These are the parameters that can be tuned in the DT classifier
dtree_model.get_params().keys()
# train the pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# set up the grid parameters
param_grid = {"criterion": ["gini", "entropy"], "max_depth": range(1, 150, 1)}
scoring = {
    "sensitivity": make_scorer(recall_score),
    "specificity": make_scorer(recall_score, pos_label=0),
}
# Putting it all together
full_cv_classifier = GridSearchCV(
    dtree_model, param_grid, cv=5, scoring=scoring, refit="sensitivity"
)
# Train the Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Best parameters of the model
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance is associated with DecisionTreeClassifier(criterion='entropy', max_depth=23, random_state=0)
# initialize and set up the operations
dtree_model4 = DecisionTreeClassifier(criterion="entropy", max_depth=23, random_state=0)
operations = [("dtree_model4", dtree_model4)]
# set up the pipeline
pipe = Pipeline(operations)
# train the pipeline
pipe.fit(X_train_sm, y_train_sm)
# prediction on the test set
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
dt_auc = roc_auc_score(y_test, dtree_model4.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, dtree_model4.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, dtree_model4.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, dtree_model4.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, dtree_model4.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(dt_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# compute the fpr and tpr for all classification thresholds
probs = dtree_model4.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# plot model
dt = DecisionTreeClassifier(criterion="entropy", max_depth=23, random_state=0)
dt.fit(X_train_sm, y_train_sm)
fig = plt.figure(figsize=(100, 20))
_ = tree.plot_tree(dt)
# #**Hyperparameter tuning with GridSearchCV for the Naïve Bayes Classifier algorithm.**
# Initialize the model
nb_classifier = GaussianNB()
# operations, in order
operations = [
    ("nb_classifier", nb_classifier)
]  # note that they are written as tuples inside a list
# set up the pipeline
pipe = Pipeline(operations)
# These are the parameters that can be tuned in the NBC classifier
nb_classifier.get_params().keys()
# train the pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# set up the grid parameters
params_NB = {"var_smoothing": np.logspace(0, -9, num=100)}
scoring = {
    "sensitivity": make_scorer(recall_score),
    "specificity": make_scorer(recall_score, pos_label=0),
}
# Putting it all together
full_cv_classifier = GridSearchCV(
    estimator=nb_classifier,
    param_grid=params_NB,
    cv=5,
    scoring=scoring,
    refit="sensitivity",
)
# Train the Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Best parameters of the model
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance is associated with GaussianNB(var_smoothing=5.3366992312063123e-05)
# initialize and set up the operations
nb_classifier8 = GaussianNB(var_smoothing=5.3366992312063123e-05)
operations = [("nb_classifier8", nb_classifier8)]
# set up the pipeline
pipe = Pipeline(operations)
# train the pipeline
pipe.fit(X_train_sm, y_train_sm)
# prediction on the test set
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
nbc_auc = roc_auc_score(y_test, nb_classifier8.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, nb_classifier8.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, nb_classifier8.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, nb_classifier8.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, nb_classifier8.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(nbc_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# compute the fpr and tpr for all classification thresholds
probs = nb_classifier8.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# #**Hyperparameter tuning with GridSearchCV for the Support Vector Machine algorithm.**
# Initialize the model
model = SVC(random_state=0)
# operations, in order
operations = [
    ("model", model)
]  # note that they are written as tuples inside a list
# set up the pipeline
pipe = Pipeline(operations)
# These are the parameters that can be tuned in the SVM classifier
model.get_params().keys()
# train the pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# set up the grid parameters
params_SVM = {
    "C": [0.1, 1, 10, 100, 1000],
    "gamma": [1, 0.1, 0.01, 0.001, 0.0001],
    "kernel": ["rbf"],
}
scoring = {
    "sensitivity": make_scorer(recall_score),
    "specificity": make_scorer(recall_score, pos_label=0),
}
# Putting it all together
full_cv_classifier = GridSearchCV(
    estimator=model,
    param_grid=params_SVM,
    cv=5,
    scoring=scoring,
    refit="sensitivity",
    verbose=3,
)
# Train the Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Best parameters of the model
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance is associated with SVC(C=1000, gamma=0.0001, random_state=0)
#
# initialize and set up the operations
# For this method to work, we must set the 'probability' parameter to probability=True
modelCG2 = SVC(C=1000, gamma=0.0001, random_state=0, probability=True)
operations = [("modelCG2", modelCG2)]
# set up the pipeline
pipe = Pipeline(operations)
# train the pipeline
pipe.fit(X_train_sm, y_train_sm)
# prediction on the test set
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
svm_auc = roc_auc_score(y_test, modelCG2.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, modelCG2.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, modelCG2.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, modelCG2.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, modelCG2.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(svm_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = modelCG2.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# Graficar la curva ROC
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# #**Ajuste de parámetros con GridSearchCV para el algoritmo Logistic Regression.**
# PCA()
pca = decomposition.PCA()
# Iniciamos el modelo
logistic_Reg = linear_model.LogisticRegression(random_state=0)
# operaciones en orden
operations = [
("pca", pca),
("logistic_Reg", logistic_Reg),
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador LR
logistic_Reg.get_params().keys()
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# modificaremos el 'n_components '
n_components = list(range(1, X_train.shape[1] + 1, 1))
n_components
# La regresión logística requiere que GridSearchCV optimice dos parámetros 'C' y 'penalty'.
# Así que hemos establecido estos dos parámetros como una lista de valores de los cuales GridSearchCV seleccionará el mejor valor del parámetro.
C = np.logspace(-4, 4, 50)
penalty = ["l1", "l2"]
# Ahora estamos creando un diccionario para establecer todas las opciones de parámetros para diferentes módulos.
parameters = dict(
pca__n_components=n_components, logistic_Reg__C=C, logistic_Reg__penalty=penalty
)
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Hacer un objeto clf_GS para GridSearchCV y ajustar el conjunto de datos, es decir, X e y
clf_GS = GridSearchCV(pipe, parameters, cv=5, scoring=scoring, refit="sensitivity")
clf_GS.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
clf_GS.best_estimator_.get_params()
clf_GS.best_estimator_
# El mejor rendimiento está asociado a LogisticRegression(C=109.85411419875572, random_state=0)
# iniciar y configurar las operaciones
logistic_RegC2 = linear_model.LogisticRegression(C=109.85411419875572, random_state=0)
operations = [("logistic_RegC2", logistic_RegC2)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
lr_auc = roc_auc_score(y_test, logistic_RegC2.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, logistic_RegC2.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, logistic_RegC2.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, logistic_RegC2.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, logistic_RegC2.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(lr_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = logistic_RegC2.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# Graficar la curva ROC
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# #**Ajuste de parámetros con GridSearchCV para el algoritmo Artificial Neural Network.**
# Iniciamos el modelo
mlp = MLPClassifier(max_iter=100, random_state=0)
# operaciones en orden
operations = [("mlp", mlp)] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador MLP
mlp.get_params().keys()
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# establecer el parámetro del grid
parameter_space = {
"hidden_layer_sizes": [(50, 50, 50), (50, 100, 50), (100,)],
"activation": ["tanh", "relu"],
"solver": ["sgd", "adam"],
"alpha": [0.0001, 0.05],
"learning_rate": ["constant", "adaptive"],
}
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Poniendo todo junto
full_cv_classifier = GridSearchCV(
mlp, parameter_space, cv=3, scoring=scoring, refit="sensitivity", verbose=3
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# El mejor rendimiento está asociado a MLPClassifier(hidden_layer_sizes=(50, 50, 50), learning_rate='adaptive', max_iter=100, random_state=0, solver='sgd')
# iniciar y configurar las operaciones
mlpC = MLPClassifier(
hidden_layer_sizes=(50, 50, 50),
learning_rate="adaptive",
max_iter=100,
random_state=0,
solver="sgd",
)
operations = [("mlpC", mlpC)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
mlp_auc = roc_auc_score(y_test, mlpC.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, mlpC.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, mlpC.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, mlpC.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, mlpC.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(mlp_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = mlpC.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# Graficar la curva ROC
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# #**Ajuste de parámetros con GridSearchCV para el algoritmo Random Forest.**
# Iniciamos el modelo
model = StreamingRFC(random_state=0)
# operaciones en orden
operations = [
("model", model)
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador RFC
model.get_params().keys()
# entrenamiento del pipeline
import warnings
warnings.filterwarnings("ignore")
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# establecer el parámetro del grid
param_grid = {
"max_depth": range(1, 150, 1),
"min_samples_leaf": [0, 0.025, 0.05, 0.075, 0.1],
"max_features": ["sqrt", "log2"],
}
# Poniendo todo junto
full_cv_classifier = GridSearchCV(
estimator=model,
param_grid=param_grid,
cv=5,
scoring=scoring,
refit="sensitivity",
verbose=3,
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# El mejor rendimiento está asociado a StreamingRFC(max_depth=4, max_features='log2', min_samples_leaf=0.025, random_state=0)
#
# iniciar y configurar las operaciones
modelRFC = StreamingRFC(
max_depth=4, max_features="log2", min_samples_leaf=0.025, random_state=0
)
operations = [("modelRFC", modelRFC)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
rfc_auc = roc_auc_score(y_test, modelRFC.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, modelRFC.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, modelRFC.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, modelRFC.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, modelRFC.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(rfc_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = modelRFC.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# Graficar la curva ROC
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# plot model
rf = RandomForestClassifier(
max_depth=4, max_features="log2", min_samples_leaf=0.025, random_state=0
)
rf.fit(X_train_sm, y_train_sm)
plt.figure(figsize=(20, 20))
_ = tree.plot_tree(rf.estimators_[0], feature_names=X_train_sm.columns, filled=True)
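# Note: rf.estimators_ holds one fitted DecisionTreeClassifier per tree in the forest; only the
# first tree is drawn here, so the plot illustrates a single member of the ensemble rather than
# the full model.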
# #**Ajuste de parámetros con GridSearchCV para el algoritmo Isolation Forest**
# Iniciamos el modelo
model_isf = IsolationForest(random_state=0)
# operaciones en orden
operations = [
("model_isf", model_isf)
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador Isolation Forest
model_isf.get_params().keys()
# entrenamiento del pipeline
import warnings
warnings.filterwarnings("ignore")
model_isf.fit(X_train_sm, y_train_sm)
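# IsolationForest.predict returns +1 for inliers and -1 for outliers; the mapping below converts
# this convention to the notebook's class labels (0 = legitimate, 1 = fraud).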
y_pred = pd.Series(model_isf.predict(X_test))
y_pred = y_pred.map({1: 0, -1: 1})
tn, fp, fn, tp = confusion_matrix(y_test.round(), y_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(
confusion_matrix(y_test.round(), y_pred), annot=True, cmap="Greys", fmt=".0f"
)
plt.show()
# establecer el parámetro del grid
param_grid = {"n_estimators": range(10, 50, 10), "max_features": range(8, 28, 10)}
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Poniendo todo junto
import warnings
warnings.filterwarnings("ignore")
full_cv_classifier = GridSearchCV(
model_isf, param_grid, cv=5, scoring=scoring, refit="sensitivity", verbose=3
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# El mejor rendimiento está asociado a IsolationForest(max_features=8, n_estimators=10, random_state=0)
# iniciar y configurar las operaciones
modelIF = IsolationForest(max_features=8, n_estimators=10, random_state=0)
operations = [("modelIF", modelIF)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
y_pred = pd.Series(modelIF.predict(X_test))
y_pred = y_pred.map({1: 0, -1: 1})
import warnings
warnings.filterwarnings("ignore")
if_auc = roc_auc_score(y_test, y_pred)
tn, fp, fn, tp = confusion_matrix(y_test.round(), y_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test.round(), y_pred)
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test.round(), y_pred, average="binary")
print("(precision_score) = {}".format(p))
r = recall_score(y_test.round(), y_pred, average="binary")
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test.round(), y_pred, average="binary")
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(if_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
fpr, tpr, thresholds = metrics.roc_curve(y_test.round(), y_pred, pos_label=0)
roc_auc = metrics.auc(fpr, tpr)
# Graficar roc_curve
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# Imprimir valor del AUC
if_auc = np.trapz(tpr, fpr)
print("AUC:", if_auc)
# plot model
iforst = IsolationForest(max_features=8, n_estimators=10, random_state=0)
iforst.fit(X_train_sm, y_train_sm)
plt.figure(figsize=(20, 20))
_ = tree.plot_tree(iforst.estimators_[0], feature_names=X_train_sm.columns, filled=True)
# #**Ajuste de parámetros con GridSearchCV para el algoritmo Gradient Boosting**
# Iniciamos el modelo
model = GradientBoostingClassifier(random_state=0)
# operaciones en orden
operations = [
("model", model)
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador GBC
model.get_params().keys()
# entrenamiento del pipeline
import warnings
warnings.filterwarnings("ignore")
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# establecer el parámetro del grid
parameters = {
"learning_rate": [0.01, 0.05, 0.1, 0.5, 1],
"min_samples_split": [2, 5, 10, 20],
"max_depth": [2, 3, 5, 10],
}
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Poniendo todo junto
full_cv_classifier = GridSearchCV(
model, parameters, cv=3, scoring=scoring, refit="sensitivity", verbose=3
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm.values.ravel())
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# El mejor rendimiento está asociado a GradientBoostingClassifier(learning_rate=1, max_depth=10, min_samples_split=10, random_state=0)
# iniciar y configurar las operaciones
modelGBC = GradientBoostingClassifier(
learning_rate=1, max_depth=10, min_samples_split=10, random_state=0
)
operations = [("modelGBC", modelGBC)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
gbc_auc = roc_auc_score(y_test, modelGBC.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, modelGBC.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, modelGBC.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, modelGBC.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, modelGBC.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(gbc_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = modelGBC.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# Graficar la curva ROC
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# #**Elección del método de ML.**
mlp_f1 = 0.008793356130923304
dt_f1 = 0.2650602409638554
gbc_f1 = 0.2608695652173913
if_f1 = 0.05555555555555556
knn_f1 = 0.0
lr_f1 = 0.05520169851380042
nbc_f1 = 0.008479067302596715
rfc_f1 = 0.014204545454545456
svm_f1 = 0.0
mylist = [mlp_f1, dt_f1, gbc_f1, if_f1, knn_f1, lr_f1, nbc_f1, rfc_f1, svm_f1]
best_f1 = max(mylist)
print("El mayor valor de F1 Score está dado para el modelo:")
if best_f1 == mlp_f1:
print("Artificial Neural Network")
if best_f1 == dt_f1:
print("Decision Tree Classifier")
if best_f1 == gbc_f1:
print("Gradient Boosting")
if best_f1 == if_f1:
print("Isolation Forest")
if best_f1 == knn_f1:
print("K-Nearest Neighbors")
if best_f1 == lr_f1:
print("Logistic Regression")
if best_f1 == nbc_f1:
print("Naïve Baiyes Classifier")
if best_f1 == rfc_f1:
print("Random Forest Classifier")
if best_f1 == svm_f1:
print("Support Vector Machine")
# # **Experimento 1: Comparación entre los resultados del Dataset original.**
df1 = df
# Dividimos los datos en entrenamiento y prueba
df_training = df1.head(int(len(df1) * 0.8))
y_train = df_training["Class"]
X_train = df_training.drop("Class", axis=1)
df_test = df.drop(df_training.index)
y_test = df_test["Class"]
X_test = df_test.drop("Class", axis=1)
print("Ejemplos usados para entrenar: ", len(X_train))
print("Ejemplos usados para test: ", len(X_test))
# Mostramos los datos de la columna 'Class' para el conjunto de prueba
y_test.value_counts()
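# Note: the split above is chronological (the first 80% of the rows go to training). A stratified
# random split is a common alternative; the sketch below is illustrative only, and the *_alt
# variables are hypothetical names that are not used in the rest of the notebook.
from sklearn.model_selection import train_test_split  # also imported at the top of the notebook
X_alt_train, X_alt_test, y_alt_train, y_alt_test = train_test_split(
    df1.drop("Class", axis=1),
    df1["Class"],
    test_size=0.2,
    stratify=df1["Class"],
    random_state=0,
)
y_alt_test.value_counts()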
# **Artificial Neural Network**
model = MLPClassifier(max_iter=100, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Decision Tree Classifier**
#
model = DecisionTreeClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Gradient Boosting**
model = GradientBoostingClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Isolation Forest**
model = IsolationForest(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
import warnings
warnings.filterwarnings("ignore")
p_pred = pipe.predict(X_test)
p_pred = p_pred.flatten()
y_pred = np.where(p_pred == -1, 1, 0)  # -1 (outlier) -> 1 (fraud), +1 (inlier) -> 0
fpr, tpr, thresholds = metrics.roc_curve(y_test.round(), p_pred, pos_label=0)
auc = np.trapz(tpr, fpr)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
print(
    "(accuracy_score) = {}".format(accuracy_score(y_test.round(), y_pred))
)
p = precision_score(y_test.round(), y_pred, average="binary")
print("(precision_score) = {}".format(p))
r = recall_score(y_test.round(), y_pred, average="binary")
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test.round(), y_pred, average="binary")
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **K-Nearest Neighbors**
model = KNeighborsClassifier()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Logistic Regression**
model = linear_model.LogisticRegression(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Naïve Bayes Classifier**
model = GaussianNB()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Random Forest Classifier**
model = StreamingRFC(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Support Vector Machine**
model = SVC(probability=True, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# # **Experimento 2: Comparación entre los modelos de ML teniendo en cuenta los resultados de los pasos 3 y 4 de la secuencia KDD (sin tratamiento del desbalance).**
df1 = df.drop_duplicates()
# Dividimos los datos en entrenamiento y prueba
df_training = df1.head(int(len(df1) * 0.8))
y_train = df_training["Class"]
X_train = df_training.drop("Class", axis=1)
df_test = df.drop(df_training.index)
y_test = df_test["Class"]
X_test = df_test.drop("Class", axis=1)
print("Ejemplos usados para entrenar: ", len(X_train))
print("Ejemplos usados para test: ", len(X_test))
# Mostramos los datos de la columna 'Class' para el conjunto de prueba
y_test.value_counts()
# **Artificial Neural Network**
model = MLPClassifier(max_iter=100, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Decision Tree Classifier**
model = DecisionTreeClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Gradient Boosting**
model = GradientBoostingClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Isolation Forest**
model = IsolationForest(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
import warnings
warnings.filterwarnings("ignore")
p_pred = pipe.predict(X_test)
p_pred = p_pred.flatten()
y_pred = np.where(p_pred == -1, 1, 0)  # -1 (outlier) -> 1 (fraud), +1 (inlier) -> 0
fpr, tpr, thresholds = metrics.roc_curve(y_test.round(), p_pred, pos_label=0)
auc = np.trapz(tpr, fpr)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
print(
    "(accuracy_score) = {}".format(accuracy_score(y_test.round(), y_pred))
)
p = precision_score(y_test.round(), y_pred, average="binary")
print("(precision_score) = {}".format(p))
r = recall_score(y_test.round(), y_pred, average="binary")
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test.round(), y_pred, average="binary")
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **K-Nearest Neighbors**
model = KNeighborsClassifier()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Logistic Regression**
model = linear_model.LogisticRegression(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Naïve Bayes Classifier**
model = GaussianNB()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Random Forest Classifier**
model = StreamingRFC(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Support Vector Machine**
model = SVC(probability=True, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# # **Experimento 3: Comparación entre los modelos de ML tras aplicar SMOTE.**
# SMOTE: Synthetic Minority Oversampling Technique
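# SMOTE creates new synthetic minority-class samples instead of simply duplicating rows: each
# synthetic point is an interpolation x_new = x_i + lambda * (x_nn - x_i) between a minority
# sample x_i and one of its k nearest minority neighbours x_nn, with lambda drawn from [0, 1].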
counter = Counter(y_train)
print("Antes", counter)
# oversampling the train dataset using SMOTE
smt = SMOTE(random_state=0)
# X_train, y_train=smt.fit_resample(X_train, y_train)
X_train_sm, y_train_sm = smt.fit_resample(X_train, y_train)
counter = Counter(y_train_sm)
print("Después", counter)
# **Artificial Neural Network**
model = MLPClassifier(max_iter=100, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Decision Tree Classifier**
model = DecisionTreeClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Gradient Boosting**
model = GradientBoostingClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Isolation Forest**
model = IsolationForest(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
import warnings
warnings.filterwarnings("ignore")
p_pred = pipe.predict(X_test)
p_pred = p_pred.flatten()
y_pred = np.where(p_pred == -1, 1, 0)  # -1 (outlier) -> 1 (fraud), +1 (inlier) -> 0
fpr, tpr, thresholds = metrics.roc_curve(y_test.round(), p_pred, pos_label=0)
auc = np.trapz(tpr, fpr)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
print(
    "(accuracy_score) = {}".format(accuracy_score(y_test.round(), y_pred))
)
p = precision_score(y_test.round(), y_pred, average="binary")
print("(precision_score) = {}".format(p))
r = recall_score(y_test.round(), y_pred, average="binary")
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test.round(), y_pred, average="binary")
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **K-Nearest Neighbors**
model = KNeighborsClassifier()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Logistic Regression**
model = linear_model.LogisticRegression(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Naïve Bayes Classifier**
model = GaussianNB()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Random Forest Classifier**
model = StreamingRFC(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Support Vector Machine**
model = SVC(probability=True, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# # **Libraries**
import io
import scipy as sp
import numpy as np  # linear algebra
import pandas as pd  # data processing
import matplotlib.pyplot as plt  # basic plotting
import seaborn as sns  # advanced plotting
from sklearn.model_selection import train_test_split  # data splitting
from sklearn.preprocessing import StandardScaler  # data normalisation
from sklearn.neural_network import MLPClassifier  # Artificial Neural Network model
from sklearn.tree import DecisionTreeClassifier  # Decision Tree model
from sklearn.ensemble import GradientBoostingClassifier  # Gradient Boosting model
from sklearn.ensemble import IsolationForest  # Isolation Forest model
from sklearn.neighbors import KNeighborsClassifier  # K-Nearest Neighbors model
from sklearn.linear_model import LogisticRegression  # Logistic Regression model
from sklearn.naive_bayes import GaussianNB  # Naïve Bayes Classifier model
from sklearn.ensemble import RandomForestClassifier  # Random Forest Classifier model
from sklearn.svm import SVC  # Support Vector Machine model
from sklearn.metrics import (
    classification_report,
    confusion_matrix,
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
    roc_auc_score,
    roc_curve,
    make_scorer,
)  # evaluation metrics
from sklearn.pipeline import Pipeline  # scikit-learn pipeline
from sklearn.pipeline import make_pipeline  # scikit-learn pipeline
from sklearn.model_selection import GridSearchCV, ShuffleSplit  # cross-validation
from sklearn import model_selection, linear_model, decomposition
from scipy.stats import uniform as sp_randFloat
from scipy.stats import randint as sp_randInt
from sklearn.tree import plot_tree
from sklearn import tree
from sklearn import metrics
# model = StreamingRFC(spf_n_fits=math.inf)
from incremental_trees.models.classification.streaming_rfc import StreamingRFC
# from imblearn.over_sampling import SMOTE
from imblearn.over_sampling._smote.base import SMOTE
from collections import Counter
# # **Data Cleaning / Preprocessing and Transformation**
from google.colab import files
uploaded = files.upload()
df = pd.read_csv(io.BytesIO(uploaded["creditcard.csv"]))
df.head()
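# Note: files.upload() only works inside Google Colab. A minimal sketch for other
# environments (the path below is an assumption, not the author's) would read the
# file straight from disk instead:
# df = pd.read_csv("/kaggle/input/creditcardfraud/creditcard.csv")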
df.plot()
plt.show()
from pandas.plotting import andrews_curves
plt.figure()
andrews_curves(df, "Class")
df.plot(subplots=True, layout=(31, 31), figsize=(31, 31), sharex=False)
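# Drawing andrews_curves over all ~285k rows (and a 31x31 subplot grid) is very slow.
# A minimal sketch that keeps the Andrews plot readable is to draw a random sample
# first; the sample size here is an assumption, not part of the original analysis:
plt.figure()
andrews_curves(df.sample(n=500, random_state=0), "Class")
plt.show()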
# Get a more general overview of the dataset.
df.info()
# Data treatment **(preprocessing)**:
# check whether there are duplicated rows
df[df.duplicated() == True]
# drop the duplicated rows
df1 = df.drop_duplicates()
# re-check: confirm that no duplicated rows remain
df1[df1.duplicated() == True]
# check whether there are null values
nulls = df.isna().sum()  # count the null values in each column
df_nulls = pd.DataFrame(nulls)  # convert the result into a dataframe
df_nulls.transpose()  # transpose the data frame and print the result
# check whether there are outliers
int_vars = df1[
[
"V1",
"V2",
"V3",
"V4",
"V5",
"V6",
"V7",
"V8",
"V9",
"V10",
"V11",
"V12",
"V13",
"V14",
"V15",
"V16",
"V17",
"V18",
"V19",
"V20",
"V21",
"V22",
"V23",
"V24",
"V25",
"V26",
"V27",
"V28",
"Amount",
"Class",
]
]
sns.pairplot(int_vars, hue="Class")
plt.show()
df1
# # **Result**
# » The data has no null values.
# » All features have the correct type.
# » Duplicated rows were dropped.
# » Outliers were found; detecting them is the goal of this project.
# The data is now free of errors and ready for building the pipeline.
df1.to_csv("creditcard.csv")
from google.colab import files
files.download("creditcard.csv")
# # **Parameter tuning with GridSearchCV:**
# Split the data into training and test sets
df_training = df1.head(int(len(df1) * 0.8))
y_train = df_training["Class"]
X_train = df_training.drop("Class", axis=1)
df_test = df.drop(df_training.index)
y_test = df_test["Class"]
X_test = df_test.drop("Class", axis=1)
print("Ejemplos usados para entrenar: ", len(X_train))
print("Ejemplos usados para test: ", len(X_test))
# Show the 'Class' distribution of the test set
y_test.value_counts()
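# The split above simply takes the first 80% of the rows, so it is effectively a
# chronological split on 'Time'. As a hedged alternative (not used in the runs below),
# the already-imported train_test_split can produce a stratified random split that
# preserves the fraud ratio in both sets:
X_alt_train, X_alt_test, y_alt_train, y_alt_test = train_test_split(
    df1.drop("Class", axis=1),
    df1["Class"],
    test_size=0.2,
    stratify=df1["Class"],
    random_state=0,
)
print("Stratified alternative:", Counter(y_alt_train), Counter(y_alt_test))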
# # **Handling the imbalanced data:**
# SMOTE: Synthetic Minority Oversampling Technique
counter = Counter(y_train)
print("Antes", counter)
# oversampling the train dataset using SMOTE
smt = SMOTE(random_state=0)
# X_train, y_train=smt.fit_resample(X_train, y_train)
X_train_sm, y_train_sm = smt.fit_resample(X_train, y_train)
counter = Counter(y_train_sm)
print("Después", counter)
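# A sketch of an alternative setup (not the author's): imblearn ships its own Pipeline
# that accepts samplers, so SMOTE would be re-fitted on each training fold inside
# GridSearchCV instead of once on the whole training set, avoiding leakage of synthetic
# samples into the validation folds. Construction only; it is not fitted here.
from imblearn.pipeline import Pipeline as ImbPipeline
smote_knn_pipe = ImbPipeline(
    [("smote", SMOTE(random_state=0)), ("knn", KNeighborsClassifier())]
)
# smote_knn_pipe could be passed to GridSearchCV in place of the plain sklearn Pipeline.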
# #**Parameter tuning with GridSearchCV for the K-Nearest Neighbors algorithm.**
# Iniciamos el modelo
knn = KNeighborsClassifier()
# operaciones en orden
operations = [("knn", knn)] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador knn
knn.get_params().keys()
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# we will tune 'n_neighbors'
k_values = list(range(1, 100))
k_values
# establecer el parámetro del grid
param_grid = {
"knn__n_neighbors": k_values
} # podemos añadir cualquier otro parámetro (to be tuned)
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Poniendo todo junto
full_cv_classifier = GridSearchCV(
pipe, param_grid, cv=5, scoring=scoring, refit="sensitivity"
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance corresponds to KNeighborsClassifier(n_neighbors=1)
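# Because two metrics are tracked and refit="sensitivity", GridSearchCV refits on the k
# with the best mean cross-validated sensitivity. Both metrics can be inspected per
# candidate (a small sketch using the standard cv_results_ attribute):
cv_res = pd.DataFrame(full_cv_classifier.cv_results_)
print(
    cv_res[
        ["param_knn__n_neighbors", "mean_test_sensitivity", "mean_test_specificity"]
    ].head()
)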
# iniciar y configurar las operaciones
knn2 = KNeighborsClassifier(n_neighbors=1)
operations = [("knn2", knn2)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
knn_auc = roc_auc_score(y_test, knn2.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, knn2.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, knn2.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, knn2.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, knn2.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(knn_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = knn2.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
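# The same block of metrics (confusion matrix, accuracy, precision, recall, F1, AUC,
# specificity, sensitivity and G-Mean) is repeated below for every model. A minimal
# helper of this kind (a sketch, not part of the original runs) could replace the
# copy-pasted code; it assumes the estimator exposes predict_proba:
def summarize_binary_metrics(estimator, X, y_true):
    y_hat = estimator.predict(X)
    tn_, fp_, fn_, tp_ = confusion_matrix(y_true, y_hat).ravel()
    sens = tp_ / (tp_ + fn_)
    spec = tn_ / (tn_ + fp_)
    return {
        "accuracy": accuracy_score(y_true, y_hat),
        "precision": precision_score(y_true, y_hat),
        "recall": recall_score(y_true, y_hat),
        "f1": f1_score(y_true, y_hat),
        "auc": roc_auc_score(y_true, estimator.predict_proba(X)[:, 1]),
        "sensitivity": sens,
        "specificity": spec,
        "g_mean": np.sqrt(sens * spec),
    }
# Example usage (the tuned KNN pipeline fitted above):
# print(summarize_binary_metrics(pipe, X_test, y_test))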
# #**Parameter tuning with GridSearchCV for the Decision Tree Classifier algorithm.**
# Iniciamos el modelo
dtree_model = DecisionTreeClassifier(random_state=0)
# operaciones en orden
operations = [
("dtree_model", dtree_model)
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador DT
dtree_model.get_params().keys()
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# establecer el parámetro del grid
param_grid = {"criterion": ["gini", "entropy"], "max_depth": range(1, 150, 1)}
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Poniendo todo junto
full_cv_classifier = GridSearchCV(
dtree_model, param_grid, cv=5, scoring=scoring, refit="sensitivity"
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance corresponds to DecisionTreeClassifier(criterion='entropy', max_depth=23, random_state=0)
# initialise and configure the operations
dtree_model4 = DecisionTreeClassifier(criterion="entropy", max_depth=23, random_state=0)
operations = [("dtree_model4", dtree_model4)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
dt_auc = roc_auc_score(y_test, dtree_model4.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, dtree_model4.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, dtree_model4.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, dtree_model4.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, dtree_model4.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(dt_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = dtree_model4.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# plot model
dt = DecisionTreeClassifier(criterion="entropy", max_depth=23, random_state=0)
dt.fit(X_train_sm, y_train_sm)
fig = plt.figure(figsize=(100, 20))
_ = tree.plot_tree(dt)
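# A depth-23 tree rendered at full size is unreadable. plot_tree accepts a max_depth
# argument, so a smaller sketch of the upper levels (the depth limit here is an
# assumption chosen only for legibility) can be drawn instead:
fig_small = plt.figure(figsize=(20, 10))
_ = tree.plot_tree(dt, max_depth=3, filled=True)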
# #**Parameter tuning with GridSearchCV for the Naïve Bayes Classifier algorithm.**
# Iniciamos el modelo
nb_classifier = GaussianNB()
# operaciones en orden
operations = [
("nb_classifier", nb_classifier)
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador NBC
nb_classifier.get_params().keys()
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# establecer el parámetro del grid
params_NB = {"var_smoothing": np.logspace(0, -9, num=100)}
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Poniendo todo junto
full_cv_classifier = GridSearchCV(
estimator=nb_classifier,
param_grid=params_NB,
cv=5,
scoring=scoring,
refit="sensitivity",
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance corresponds to GaussianNB(var_smoothing=5.3366992312063123e-05)
# initialise and configure the operations
nb_classifier8 = GaussianNB(var_smoothing=5.3366992312063123e-05)
operations = [("nb_classifier8", nb_classifier8)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
nbc_auc = roc_auc_score(y_test, nb_classifier8.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, nb_classifier8.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, nb_classifier8.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, nb_classifier8.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, nb_classifier8.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(nbc_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = nb_classifier8.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# #**Parameter tuning with GridSearchCV for the Support Vector Machine algorithm.**
# Iniciamos el modelo
model = SVC(random_state=0)
# operaciones en orden
operations = [
("model", model)
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador SVM
model.get_params().keys()
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# establecer el parámetro del grid
params_SVM = {
"C": [0.1, 1, 10, 100, 1000],
"gamma": [1, 0.1, 0.01, 0.001, 0.0001],
"kernel": ["rbf"],
}
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Poniendo todo junto
full_cv_classifier = GridSearchCV(
estimator=model,
param_grid=params_SVM,
cv=5,
scoring=scoring,
refit="sensitivity",
verbose=3,
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance corresponds to SVC(C=1000, gamma=0.0001, random_state=0)
#
# initialise and configure the operations
# For predict_proba to work, the 'probability' parameter must be set to probability=True
modelCG2 = SVC(C=1000, gamma=0.0001, random_state=0, probability=True)
operations = [("modelCG2", modelCG2)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
svm_auc = roc_auc_score(y_test, modelCG2.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, modelCG2.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, modelCG2.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, modelCG2.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, modelCG2.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(svm_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = modelCG2.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# #**Parameter tuning with GridSearchCV for the Logistic Regression algorithm.**
# PCA()
pca = decomposition.PCA()
# Iniciamos el modelo
logistic_Reg = linear_model.LogisticRegression(random_state=0)
# operaciones en orden
operations = [
("pca", pca),
("logistic_Reg", logistic_Reg),
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador LR
logistic_Reg.get_params().keys()
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# we will tune 'n_components'
n_components = list(range(1, X_train.shape[1] + 1, 1))
n_components
# Logistic regression has two parameters for GridSearchCV to optimise, 'C' and 'penalty'.
# Both are given as lists of values from which GridSearchCV will pick the best one.
# Note: the 'l1' penalty requires a compatible solver (e.g. 'liblinear' or 'saga');
# with the default 'lbfgs' solver those candidates fail and are scored as NaN.
C = np.logspace(-4, 4, 50)
penalty = ["l1", "l2"]
# Now build a dictionary with all the parameter options for the different pipeline steps.
parameters = dict(
pca__n_components=n_components, logistic_Reg__C=C, logistic_Reg__penalty=penalty
)
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Create a clf_GS object with GridSearchCV and fit it to the dataset, i.e. X and y
clf_GS = GridSearchCV(pipe, parameters, cv=5, scoring=scoring, refit="sensitivity")
clf_GS.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
clf_GS.best_estimator_.get_params()
clf_GS.best_estimator_
# The best performance corresponds to LogisticRegression(C=109.85411419875572, random_state=0)
# initialise and configure the operations
logistic_RegC2 = linear_model.LogisticRegression(C=109.85411419875572, random_state=0)
operations = [("logistic_RegC2", logistic_RegC2)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
lr_auc = roc_auc_score(y_test, logistic_RegC2.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, logistic_RegC2.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, logistic_RegC2.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, logistic_RegC2.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, logistic_RegC2.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(lr_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = logistic_RegC2.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# #**Parameter tuning with GridSearchCV for the Artificial Neural Network algorithm.**
# Iniciamos el modelo
mlp = MLPClassifier(max_iter=100, random_state=0)
# operaciones en orden
operations = [("mlp", mlp)] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# These are the parameters that can be tuned in the MLP classifier
mlp.get_params().keys()
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# establecer el parámetro del grid
parameter_space = {
"hidden_layer_sizes": [(50, 50, 50), (50, 100, 50), (100,)],
"activation": ["tanh", "relu"],
"solver": ["sgd", "adam"],
"alpha": [0.0001, 0.05],
"learning_rate": ["constant", "adaptive"],
}
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Poniendo todo junto
full_cv_classifier = GridSearchCV(
mlp, parameter_space, cv=3, scoring=scoring, refit="sensitivity", verbose=3
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance corresponds to MLPClassifier(hidden_layer_sizes=(50, 50, 50), learning_rate='adaptive', max_iter=100, random_state=0, solver='sgd')
# initialise and configure the operations
mlpC = MLPClassifier(
hidden_layer_sizes=(50, 50, 50),
learning_rate="adaptive",
max_iter=100,
random_state=0,
solver="sgd",
)
operations = [("mlpC", mlpC)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
mlp_auc = roc_auc_score(y_test, mlpC.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, mlpC.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, mlpC.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, mlpC.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, mlpC.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(mlp_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = mlpC.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
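# StandardScaler is imported at the top but never used, and MLPClassifier is sensitive
# to feature scale. A sketch of how scaling could be added as a pipeline step follows
# (construction only; it is not fitted here and is not part of the author's runs):
scaled_mlp_pipe = Pipeline(
    [
        ("scaler", StandardScaler()),
        (
            "mlp",
            MLPClassifier(
                hidden_layer_sizes=(50, 50, 50),
                learning_rate="adaptive",
                max_iter=100,
                random_state=0,
                solver="sgd",
            ),
        ),
    ]
)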
# #**Parameter tuning with GridSearchCV for the Random Forest algorithm.**
# Iniciamos el modelo
model = StreamingRFC(random_state=0)
# operaciones en orden
operations = [
("model", model)
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador RFC
model.get_params().keys()
# entrenamiento del pipeline
import warnings
warnings.filterwarnings("ignore")
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# establecer el parámetro del grid
param_grid = {
    "max_depth": range(1, 150, 1),
    # min_samples_leaf must be an int >= 1 or a float in (0, 0.5]; 0 is invalid and was removed
    "min_samples_leaf": [0.025, 0.05, 0.075, 0.1],
    "max_features": ["sqrt", "log2"],
}
# Poniendo todo junto
full_cv_classifier = GridSearchCV(
estimator=model,
param_grid=param_grid,
cv=5,
scoring=scoring,
refit="sensitivity",
verbose=3,
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance corresponds to StreamingRFC(max_depth=4, max_features='log2', min_samples_leaf=0.025, random_state=0)
#
# initialise and configure the operations
modelRFC = StreamingRFC(
max_depth=4, max_features="log2", min_samples_leaf=0.025, random_state=0
)
operations = [("modelRFC", modelRFC)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
rfc_auc = roc_auc_score(y_test, modelRFC.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, modelRFC.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, modelRFC.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, modelRFC.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, modelRFC.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(rfc_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = modelRFC.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# plot model
rf = RandomForestClassifier(
max_depth=4, max_features="log2", min_samples_leaf=0.025, random_state=0
)
rf.fit(X_train_sm, y_train_sm)
plt.figure(figsize=(20, 20))
_ = tree.plot_tree(rf.estimators_[0], feature_names=X_train_sm.columns, filled=True)
# #**Parameter tuning with GridSearchCV for the Isolation Forest algorithm**
# Iniciamos el modelo
model_isf = IsolationForest(random_state=0)
# operaciones en orden
operations = [
("model_isf", model_isf)
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# These are the parameters that can be tuned in the Isolation Forest model
model_isf.get_params().keys()
# entrenamiento del pipeline
import warnings
warnings.filterwarnings("ignore")
model_isf.fit(X_train_sm, y_train_sm)
y_pred = pd.Series(model_isf.predict(X_test))
y_pred = y_pred.map({1: 0, -1: 1})
tn, fp, fn, tp = confusion_matrix(y_test.round(), y_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(
confusion_matrix(y_test.round(), y_pred), annot=True, cmap="Greys", fmt=".0f"
)
plt.show()
# establecer el parámetro del grid
param_grid = {"n_estimators": range(10, 50, 10), "max_features": range(8, 28, 10)}
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Poniendo todo junto
import warnings
warnings.filterwarnings("ignore")
full_cv_classifier = GridSearchCV(
model_isf, param_grid, cv=5, scoring=scoring, refit="sensitivity", verbose=3
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm)
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance corresponds to IsolationForest(max_features=8, n_estimators=10, random_state=0)
# initialise and configure the operations
modelIF = IsolationForest(max_features=8, n_estimators=10, random_state=0)
operations = [("modelIF", modelIF)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
y_pred = pd.Series(modelIF.predict(X_test))
y_pred = y_pred.map({1: 0, -1: 1})
import warnings
warnings.filterwarnings("ignore")
# use the mapped 0/1 predictions (not the raw -1/1 output) so the AUC is meaningful
if_auc = roc_auc_score(y_test, y_pred)
tn, fp, fn, tp = confusion_matrix(y_test.round(), y_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test.round(), y_pred)  # compare against the mapped 0/1 predictions, not the raw -1/1 output
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test.round(), y_pred, average="binary")
print("(precision_score) = {}".format(p))
r = recall_score(y_test.round(), y_pred, average="binary")
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test.round(), y_pred, average="binary")
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(if_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
fpr, tpr, thresholds = metrics.roc_curve(y_test.round(), y_pred, pos_label=0)
roc_auc = metrics.auc(fpr, tpr)
# Plot the ROC curve
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# Print the AUC value
if_auc = np.trapz(tpr, fpr)
print("AUC:", if_auc)
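# The AUC above is computed from hard -1/1 predictions. IsolationForest also exposes a
# continuous anomaly score; negating decision_function (lower means more anomalous)
# gives a score where larger means "more likely fraud", suitable for a threshold-free
# ROC AUC (a sketch, not part of the original evaluation):
anomaly_score = -modelIF.decision_function(X_test)
print("AUC from continuous anomaly scores:", roc_auc_score(y_test, anomaly_score))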
# plot model
iforst = IsolationForest(max_features=8, n_estimators=10, random_state=0)
iforst.fit(X_train_sm, y_train_sm)
plt.figure(figsize=(20, 20))
_ = tree.plot_tree(iforst.estimators_[0], feature_names=X_train_sm.columns, filled=True)
# #**Parameter tuning with GridSearchCV for the Gradient Boosting algorithm**
# Iniciamos el modelo
model = GradientBoostingClassifier(random_state=0)
# operaciones en orden
operations = [
("model", model)
] # Observe que están escritos en tuplas dentro de una lista
# configurar el pipeline
pipe = Pipeline(operations)
# Estos son los parámetros que se pueden modificar en el clasificador GBC
model.get_params().keys()
# entrenamiento del pipeline
import warnings
warnings.filterwarnings("ignore")
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("Leyenda »", "tn: ", tn, "fp: ", fp, "fn: ", fn, "tp: ", tp)
plt.figure(figsize=(5, 2))
plt.title("Matriz de Confusión del Pipeline:", fontsize=16)
sns.heatmap(confusion_matrix(y_test, pipe_pred), annot=True, cmap="Greys", fmt=".0f")
plt.show()
# establecer el parámetro del grid
parameters = {
"learning_rate": [0.01, 0.05, 0.1, 0.5, 1],
"min_samples_split": [2, 5, 10, 20],
"max_depth": [2, 3, 5, 10],
}
scoring = {
"sensitivity": make_scorer(recall_score),
"specificity": make_scorer(recall_score, pos_label=0),
}
# Poniendo todo junto
full_cv_classifier = GridSearchCV(
model, parameters, cv=3, scoring=scoring, refit="sensitivity", verbose=3
)
# Entrenamos el Pipeline
full_cv_classifier.fit(X_train_sm, y_train_sm.values.ravel())
# Mejores parámetros del modelo
full_cv_classifier.best_estimator_.get_params()
full_cv_classifier.best_estimator_
# The best performance corresponds to GradientBoostingClassifier(learning_rate=1, max_depth=10, min_samples_split=10, random_state=0)
# initialise and configure the operations
modelGBC = GradientBoostingClassifier(
learning_rate=1, max_depth=10, min_samples_split=10, random_state=0
)
operations = [("modelGBC", modelGBC)]
# configurar el pipeline
pipe = Pipeline(operations)
# entrenamiento del pipeline
pipe.fit(X_train_sm, y_train_sm)
# predicción con el conjunto de prueba
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
gbc_auc = roc_auc_score(y_test, modelGBC.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, modelGBC.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, modelGBC.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, modelGBC.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, modelGBC.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(gbc_auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# calculando el fpr y tpr para todos los thresholds de la clasificación
probs = modelGBC.predict_proba(X_test)
preds = probs[:, 1]
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
roc_auc = metrics.auc(fpr, tpr)
# method: plt
plt.title("Receiver Operating Characteristic")
plt.plot(fpr, tpr, "b", label="AUC = %0.2f" % roc_auc)
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1], "r--")
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
# #**Choosing the ML method.**
mlp_f1 = 0.008793356130923304
dt_f1 = 0.2650602409638554
gbc_f1 = 0.2608695652173913
if_f1 = 0.05555555555555556
knn_f1 = 0.0
lr_f1 = 0.05520169851380042
nbc_f1 = 0.008479067302596715
rfc_f1 = 0.014204545454545456
svm_f1 = 0.0
mylist = [mlp_f1, dt_f1, gbc_f1, if_f1, knn_f1, lr_f1, nbc_f1, rfc_f1, svm_f1]
best_f1 = 0.0
for x in mylist:
a = x
if a > best_f1:
best_f1 = a
print("El mayor valor de F1 Score está dado para el modelo:")
if best_f1 == mlp_f1:
print("Artificial Neural Network")
if best_f1 == dt_f1:
print("Decision Tree Classifier")
if best_f1 == gbc_f1:
print("Gradient Boosting")
if best_f1 == if_f1:
print("Isolation Forest")
if best_f1 == knn_f1:
print("K-Nearest Neighbors")
if best_f1 == lr_f1:
print("Logistic Regression")
if best_f1 == nbc_f1:
print("Naïve Baiyes Classifier")
if best_f1 == rfc_f1:
print("Random Forest Classifier")
if best_f1 == svm_f1:
print("Support Vector Machine")
# # **Experiment 1: Comparison of the results on the original dataset.**
df1 = df
# Split the data into training and test sets
df_training = df1.head(int(len(df1) * 0.8))
y_train = df_training["Class"]
X_train = df_training.drop("Class", axis=1)
df_test = df.drop(df_training.index)
y_test = df_test["Class"]
X_test = df_test.drop("Class", axis=1)
print("Ejemplos usados para entrenar: ", len(X_train))
print("Ejemplos usados para test: ", len(X_test))
# Show the 'Class' distribution of the test set
y_test.value_counts()
# **Artificial Neural Network**
model = MLPClassifier(max_iter=100, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Decision Tree Classifier**
#
model = DecisionTreeClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Gradient Boosting**
model = GradientBoostingClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Isolation Forest**
model = IsolationForest(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
import warnings
warnings.filterwarnings("ignore")
p_pred = pipe.predict(X_test)
p_pred = p_pred.flatten()
# IsolationForest labels anomalies as -1; map -1 -> 1 (fraud) and 1 -> 0 (genuine),
# consistent with the mapping used in the tuning section above
y_pred = np.where(p_pred == -1, 1, 0)
fpr, tpr, thresholds = metrics.roc_curve(y_test.round(), p_pred, pos_label=0)
auc = np.trapz(tpr, fpr)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
print(
    "(accuracy_score) = {}".format(
        accuracy_score(y_test.round(), y_pred)  # compare against the mapped 0/1 predictions
    )
)
p = precision_score(y_test.round(), y_pred, average="binary")
print("(precision_score) = {}".format(p))
r = recall_score(y_test.round(), y_pred, average="binary")
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test.round(), y_pred, average="binary")
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **K-Nearest Neighbors**
model = KNeighborsClassifier()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Logistic Regression**
model = linear_model.LogisticRegression(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Naïve Bayes Classifier**
model = GaussianNB()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Random Forest Classifier**
model = StreamingRFC(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Support Vector Machine**
model = SVC(probability=True, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# # **Experiment 2: Comparison of the ML models using the results of steps 3 and 4 of the KDD sequence (without handling the class imbalance).**
df1 = df.drop_duplicates()
# Split the data into training and test sets
df_training = df1.head(int(len(df1) * 0.8))
y_train = df_training["Class"]
X_train = df_training.drop("Class", axis=1)
df_test = df.drop(df_training.index)
y_test = df_test["Class"]
X_test = df_test.drop("Class", axis=1)
print("Ejemplos usados para entrenar: ", len(X_train))
print("Ejemplos usados para test: ", len(X_test))
# Show the 'Class' distribution of the test set
y_test.value_counts()
# **Artificial Neural Network**
model = MLPClassifier(max_iter=100, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Decision Tree Classifier**
model = DecisionTreeClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Gradient Boosting**
model = GradientBoostingClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Isolation Forest**
model = IsolationForest(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
import warnings
warnings.filterwarnings("ignore")
p_pred = pipe.predict(X_test)
p_pred = p_pred.flatten()
# IsolationForest labels anomalies as -1; map -1 -> 1 (fraud) and 1 -> 0 (genuine),
# consistent with the mapping used in the tuning section above
y_pred = np.where(p_pred == -1, 1, 0)
fpr, tpr, thresholds = metrics.roc_curve(y_test.round(), p_pred, pos_label=0)
auc = np.trapz(tpr, fpr)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
print(
    "(accuracy_score) = {}".format(
        accuracy_score(y_test.round(), y_pred)  # compare against the mapped 0/1 predictions
    )
)
p = precision_score(y_test.round(), y_pred, average="binary")
print("(precision_score) = {}".format(p))
r = recall_score(y_test.round(), y_pred, average="binary")
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test.round(), y_pred, average="binary")
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **K-Nearest Neighbors**
model = KNeighborsClassifier()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Logistic Regression**
model = linear_model.LogisticRegression(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Naïve Bayes Classifier**
model = GaussianNB()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Random Forest Classifier**
model = StreamingRFC(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Support Vector Machine**
model = SVC(probability=True, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train, y_train)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# # **Experiment 3: Comparison of the ML models after applying SMOTE.**
# SMOTE: Synthetic Minority Oversampling Technique
counter = Counter(y_train)
print("Antes", counter)
# oversampling the train dataset using SMOTE
smt = SMOTE(random_state=0)
# X_train, y_train=smt.fit_resample(X_train, y_train)
X_train_sm, y_train_sm = smt.fit_resample(X_train, y_train)
counter = Counter(y_train_sm)
print("Después", counter)
# **Artificial Neural Network**
model = MLPClassifier(max_iter=100, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Decision Tree Classifier**
model = DecisionTreeClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Gradient Boosting**
model = GradientBoostingClassifier(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Isolation Forest**
model = IsolationForest(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
import warnings
warnings.filterwarnings("ignore")
p_pred = pipe.predict(X_test)
p_pred = p_pred.flatten()
y_pred = np.where(p_pred > 0.5, 1, 0)
fpr, tpr, thresholds = metrics.roc_curve(y_test.round(), p_pred, pos_label=0)
auc = np.trapz(tpr, fpr)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test.round(), y_pred)  # use the 0/1-mapped predictions, not the raw -1/1 IsolationForest output
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test.round(), y_pred, average="binary")
print("(precision_score) = {}".format(p))
r = recall_score(y_test.round(), y_pred, average="binary")
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test.round(), y_pred, average="binary")
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **K-Nearest Neighbors**
model = KNeighborsClassifier()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Logistic Regression**
model = linear_model.LogisticRegression(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Naïve Bayes Classifier**
model = GaussianNB()
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Random Forest Classifier**
model = StreamingRFC(random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
# **Support Vector Machine**
model = SVC(probability=True, random_state=0)
operations = [("model", model)]
pipe = Pipeline(operations)
pipe.fit(X_train_sm, y_train_sm)
pipe_pred = pipe.predict(X_test)
import warnings
warnings.filterwarnings("ignore")
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
tn, fp, fn, tp = confusion_matrix(y_test, pipe_pred).ravel()
print("tn : ", tn)
print("fp : ", fp)
print("fn : ", fn)
print("tp : ", tp)
a = accuracy_score(y_test, model.predict(X_test))
print("(accuracy_score) = {}".format(a))
p = precision_score(y_test, model.predict(X_test))
print("(precision_score) = {}".format(p))
r = recall_score(y_test, model.predict(X_test))
print("(recall_score) = {}".format(r))
f1 = f1_score(y_test, model.predict(X_test))
print("(f1_score) = {}".format(f1))
print("(auc_score) = {}".format(auc))
specificity = tn / (tn + fp)
print("specificity : ", specificity)
sensitivity = tp / (tp + fn)
print("sensitivity : ", sensitivity)
G_Mean = np.sqrt(sensitivity * specificity)
print("G-Mean : ", G_Mean)
| false | 0 | 24,317 | 0 | 26,190 | 24,317 |
||
129969333
|
<jupyter_start><jupyter_text>images_reorganized1
Kaggle dataset identifier: images-reorganized1
<jupyter_script># # Objective:
# Develop an algorithm which will identify the genre when provided with a painting, with state of the art precision.
# ## Read data
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import json
import os
from tqdm import tqdm, tqdm_notebook
import random
import cv2
import tensorflow as tf
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import *
from tensorflow.keras.optimizers import *
from tensorflow.keras.applications import *
from tensorflow.keras.callbacks import *
from tensorflow.keras.initializers import *
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers
from numpy.random import seed
from keras import regularizers
from sklearn.model_selection import train_test_split
artists = pd.read_csv("../input/artists2/artists2.csv")
artist_use = artists[["genre", "paintings"]]
# ## Data Processing
# Explore images of top artists
images_dir = "../input/genrerec3/archive"
artists_dirs = os.listdir(images_dir)
artists_temp = artist_use.groupby(["genre"]).sum().reset_index()
artists_temp = artists_temp.sort_values(by=["paintings"], ascending=False)
artists_temp = artists_temp[artists_temp["paintings"] >= 500].reset_index()
artists_temp = artists_temp[artists_temp["paintings"] != 1048].reset_index()
artists_temp["paintings"]
artists_temp
artists_temp["class_weight"] = artists_temp.paintings.sum() / (
artists_temp.shape[0] * artists_temp.paintings
)
artists_genre = np.array(artists_temp["genre"])
artists_genre = np.unique(artists_genre)
class_weights = artists_temp["class_weight"].to_dict()
class_weights
images_dir = "../input/genrerec3/archive"
artists_dirs = os.listdir(images_dir)
for name in artists_genre:
if os.path.exists(os.path.join(images_dir, name)):
print("Found -->", os.path.join(images_dir, name))
else:
print("Did not find -->", os.path.join(images_dir, name))
import os
import numpy as np
from PIL import Image
def extract_edge_features(img):
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert to grey scale
gray = cv2.GaussianBlur(gray, (5, 5), 0.2) # Gaussian filter
edges = cv2.Canny(gray, 50, 150) # Canny edge
edges = cv2.resize(edges, (IMG_SIZE, IMG_SIZE))
return edges.flatten()
# read images from folder and extract feature
def read_images_from_folder(folder):
X = []
label_folder = folder
for filename in os.listdir(label_folder):
img_path = os.path.join(label_folder, filename)
img = cv2.imread(img_path)
edges = extract_edge_features(img)
X.append(edges)
X = np.array(X)
return X
def read_images_from_folder_new(folder):
X = []
label_folder = folder
for name in os.listdir(label_folder):
# Construct the full path to the image
subfile_path = os.path.join(label_folder, name)
for filename in os.listdir(subfile_path):
file_path = os.path.join(subfile_path, filename)
img = cv2.imread(file_path)
edges = extract_edge_features(img)
X.append(edges)
X = np.array(X)
return X
IMG_SIZE = 224
imp_dir_path = "../input/genrerec3/archive/Impressionism"
imp_X_train = read_images_from_folder(imp_dir_path)
baq_dir_path = "../input/genrerec3/archive/Baroque"
baq_X_train = read_images_from_folder(baq_dir_path)
nr_dir_path = "../input/genrerec3/archive/Northern Renaissance"
nr_X_train = read_images_from_folder_new(nr_dir_path)
X = np.append(imp_X_train, baq_X_train, axis=0)
X = np.append(X, nr_X_train, axis=0)
Y = (
[[1, 0, 0]] * len(imp_X_train)
+ [[0, 1, 0]] * len(baq_X_train)
+ [[0, 0, 1]] * len(nr_X_train)
)
x = []
for i in range(len(X)):
    x.append(X[i].reshape((224, 224, 1)))  # add the channel axis expected by the (224, 224, 1) model input
x = np.array(x)
Y = np.array(Y)
X_train, X_test, y_train, y_test = train_test_split(
x, Y, test_size=0.25, random_state=0
)
# ## Data Augmentation
# Augment data
batch_size = 16
train_input_shape = (224, 224, 1)
n_classes = len(artists_temp)
train_datagen = ImageDataGenerator(
validation_split=0.2,
rescale=1.0 / 255.0,
zoom_range=0.7,
horizontal_flip=True,
vertical_flip=True,
)
train_generator = train_datagen.flow_from_directory(
    directory=images_dir,
    class_mode="categorical",
    target_size=train_input_shape[0:2],
    color_mode="grayscale",  # match the single-channel input shape used below
    batch_size=batch_size,
    subset="training",
    shuffle=True,
    classes=artists_genre.tolist(),
)
valid_generator = train_datagen.flow_from_directory(
    directory=images_dir,
    class_mode="categorical",
    target_size=train_input_shape[0:2],
    color_mode="grayscale",
    batch_size=batch_size,
    subset="validation",
    shuffle=True,
    classes=artists_genre.tolist(),
)
STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size
print("Total number of batches =", STEP_SIZE_TRAIN, "and", STEP_SIZE_VALID)
# ## Build Model
base_model = Xception(weights=None, include_top=False, input_shape=train_input_shape)
for layer in base_model.layers:
layer.trainable = True
# Add layers at the end
X = base_model.output
X = Flatten()(X)
X = Dense(512, kernel_initializer="he_uniform")(X)
X = BatchNormalization()(X)
X = Activation("relu")(X)
X = Dropout(0.5)(X)
X = Dense(16, kernel_initializer="he_uniform")(X)
X = Dropout(0.5)(X)
X = BatchNormalization()(X)
X = Activation("relu")(X)
output = Dense(3, activation="softmax")(X)
model = Model(inputs=base_model.input, outputs=output)
optimizer = Adam(lr=0.0001)
model.compile(
loss="categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]
)
n_epoch = 10
early_stop = EarlyStopping(
monitor="val_loss", patience=20, verbose=1, mode="auto", restore_best_weights=True
)
# reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5,
# verbose=1, mode='auto')
# Train the model - all layers
history1 = model.fit(
    x=X_train,
    y=y_train,
    validation_data=(X_test, y_test),  # validation metrics are needed when the histories are merged below
    epochs=n_epoch,
    shuffle=True,
    verbose=1,
    # callbacks=[reduce_lr],
    workers=16,
)
# Freeze the Xception base layers, then unfreeze the first 50 layers and train again
for layer in model.layers:
layer.trainable = False
for layer in model.layers[:50]:
layer.trainable = True
optimizer = Adam(lr=0.0001)
model.compile(
loss="categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]
)
n_epoch = 50
history2 = model.fit(
    x=X_train,
    y=y_train,
    validation_data=(X_test, y_test),
    epochs=n_epoch,
    shuffle=True,
    verbose=1,
    # callbacks=[reduce_lr],
    workers=16,
)
# ## Training graph
# Merge history1 and history2
history = {}
history["loss"] = history1.history["loss"] + history2.history["loss"]
history["accuracy"] = history1.history["accuracy"] + history2.history["accuracy"]
history["val_loss"] = history1.history["val_loss"] + history2.history["val_loss"]
history["val_accuracy"] = (
history1.history["val_accuracy"] + history2.history["val_accuracy"]
)
# history['learning_rate'] = history1.history['learning_rate'] + history2.history['learning_rate']
# Plot the training graph
def plot_training(history):
acc = history["accuracy"]
val_acc = history["val_accuracy"]
loss = history["loss"]
val_loss = history["val_loss"]
epochs = range(len(loss))
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
axes[0].plot(epochs, acc, "r-", label="Training Accuracy")
axes[0].plot(epochs, val_acc, "b--", label="Validation Accuracy")
axes[0].set_title("Training and Validation Accuracy")
axes[0].legend(loc="best")
axes[1].plot(epochs, loss, "r-", label="Training Loss")
axes[1].plot(epochs, val_loss, "b--", label="Validation Loss")
axes[1].set_title("Training and Validation Loss")
axes[1].legend(loc="best")
plt.show()
plot_training(history)
# ## Evaluate performance
# Prediction accuracy on train data
score = model.evaluate_generator(train_generator, verbose=1)
print("Prediction accuracy on train data =%.3f" % score[1])
# Prediction accuracy on CV data
score = model.evaluate_generator(valid_generator, verbose=1)
print("Prediction accuracy on CV data =%.3f" % score[1])
# ## Confusion Matrix.
# Classification report and confusion matrix
from sklearn.metrics import *
import seaborn as sns
tick_labels = artists_genre
def showClassficationReport_Generator(model, valid_generator, STEP_SIZE_VALID):
# Loop on each generator batch and predict
y_pred, y_true = [], []
for i in range(STEP_SIZE_VALID):
(X, y) = next(valid_generator)
y_pred.append(model.predict(X))
y_true.append(y)
# Create a flat list for y_true and y_pred
y_pred = [subresult for result in y_pred for subresult in result]
y_true = [subresult for result in y_true for subresult in result]
# Update Truth vector based on argmax
y_true = np.argmax(y_true, axis=1)
y_true = np.asarray(y_true).ravel()
# Update Prediction vector based on argmax
y_pred = np.argmax(y_pred, axis=1)
y_pred = np.asarray(y_pred).ravel()
# Confusion Matrix
fig, ax = plt.subplots(figsize=(10, 10))
conf_matrix = confusion_matrix(y_true, y_pred, labels=np.arange(n_classes))
conf_matrix = conf_matrix / np.sum(conf_matrix, axis=1)
sns.heatmap(
conf_matrix,
annot=True,
fmt=".2f",
square=True,
cbar=False,
cmap=plt.cm.jet,
xticklabels=tick_labels,
yticklabels=tick_labels,
ax=ax,
)
ax.set_ylabel("Actual")
ax.set_xlabel("Predicted")
ax.set_title("Confusion Matrix")
plt.show()
print("Classification Report:")
print(
classification_report(
y_true, y_pred, labels=np.arange(n_classes), target_names=artists_genre
)
)
showClassficationReport_Generator(model, valid_generator, STEP_SIZE_VALID)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/969/129969333.ipynb
|
images-reorganized1
|
keldon
|
[{"Id": 129969333, "ScriptId": 38651102, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6189082, "CreationDate": "05/17/2023 19:50:45", "VersionNumber": 2.0, "Title": "Identify genre from Art with Xception f1c1ca", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 309.0, "LinesInsertedFromPrevious": 62.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 247.0, "LinesInsertedFromFork": 62.0, "LinesDeletedFromFork": 205.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 247.0, "TotalVotes": 0}]
|
[{"Id": 186408188, "KernelVersionId": 129969333, "SourceDatasetVersionId": 3251836}, {"Id": 186408187, "KernelVersionId": 129969333, "SourceDatasetVersionId": 3235471}, {"Id": 186408186, "KernelVersionId": 129969333, "SourceDatasetVersionId": 310927}, {"Id": 186408190, "KernelVersionId": 129969333, "SourceDatasetVersionId": 5659667}, {"Id": 186408189, "KernelVersionId": 129969333, "SourceDatasetVersionId": 3260882}]
|
[{"Id": 3251836, "DatasetId": 1970629, "DatasourceVersionId": 3302126, "CreatorUserId": 6189082, "LicenseName": "Unknown", "CreationDate": "03/03/2022 18:09:01", "VersionNumber": 1.0, "Title": "images_reorganized1", "Slug": "images-reorganized1", "Subtitle": "made from artist csv", "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1970629, "CreatorUserId": 6189082, "OwnerUserId": 6189082.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3251836.0, "CurrentDatasourceVersionId": 3302126.0, "ForumId": 1994800, "Type": 2, "CreationDate": "03/03/2022 18:09:01", "LastActivityDate": "03/03/2022", "TotalViews": 37, "TotalDownloads": 0, "TotalVotes": 0, "TotalKernels": 7}]
|
[{"Id": 6189082, "UserName": "keldon", "DisplayName": "Keldon", "RegisterDate": "11/18/2020", "PerformanceTier": 0}]
|
| false | 1 | 3,088 | 0 | 3,113 | 3,088 |
||
129969878
|
# # Stationary Time Series
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf
# Generate synthetic data for a stationary and seasonal time series
np.random.seed(0)
index = pd.date_range(start="2022-01-01", end="2022-12-31", freq="D")
seasonality = np.sin(2 * np.pi * np.arange(len(index)) / 365)
noise = np.random.normal(0, 0.1, size=len(index))
data = seasonality + noise
df = pd.DataFrame(data, index=index, columns=["Value"])
# Plot the time series
plt.figure(figsize=(10, 4))
plt.plot(df.index, df["Value"])
plt.xlabel("Date")
plt.ylabel("Value")
plt.title("Stationary and Seasonal Time Series")
plt.show()
# Plot the autocorrelation function (ACF)
plt.figure(figsize=(10, 4))
plot_acf(df["Value"], lags=30)
plt.xlabel("Lag")
plt.ylabel("Autocorrelation")
plt.title("Autocorrelation Function (ACF) - Stationary and Seasonal Time Series")
plt.show()
# # Stationary Time Series with Decaying Auto Correlation Function (ACF)
#
# Generate synthetic data for a stationary AR(1) time series, whose ACF decays geometrically with the lag
np.random.seed(0)
index = pd.date_range(start="2022-01-01", end="2022-12-31", freq="D")
noise = np.random.normal(0, 1, size=len(index))
data = np.zeros(len(index))
for t in range(1, len(index)):
    data[t] = 0.8 * data[t - 1] + noise[t]  # AR(1): each value depends on the previous one
df = pd.DataFrame(data, index=index, columns=["Value"])
# Plot the time series
plt.figure(figsize=(10, 4))
plt.plot(df.index, df["Value"])
plt.xlabel("Date")
plt.ylabel("Value")
plt.title("Stationary Time Series with Decaying ACF")
plt.show()
# Plot the autocorrelation function (ACF)
plt.figure(figsize=(10, 4))
plot_acf(df["Value"], lags=30)
plt.xlabel("Lag")
plt.ylabel("Autocorrelation")
plt.title("Autocorrelation Function (ACF) - Stationary Time Series with Decaying ACF")
plt.show()
# # Non-Stationary Time Series with Decaying ACF
# Generate synthetic data for a non-stationary time series: a random walk built as the
# cumulative sum of white noise, whose sample ACF decays only very slowly
np.random.seed(0)
index = pd.date_range(start="2022-01-01", end="2022-12-31", freq="D")
data = np.cumsum(np.random.normal(0, 1, size=len(index)))
df = pd.DataFrame(data, index=index, columns=["Value"])
# Plot the time series
plt.figure(figsize=(10, 4))
plt.plot(df.index, df["Value"])
plt.xlabel("Date")
plt.ylabel("Value")
plt.title("Non-Stationary Time Series (Random Walk)")
plt.show()
# Plot the autocorrelation function (ACF)
plt.figure(figsize=(10, 4))
plot_acf(df["Value"], lags=30)
plt.xlabel("Lag")
plt.ylabel("Autocorrelation")
plt.title("Autocorrelation Function (ACF) - Non-Stationary Time Series")
plt.show()
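# A common way to back the visual inspection above with a number is a unit-root test such as the
# Augmented Dickey-Fuller test (an add-on sketch, assuming statsmodels is available as the imports
# above suggest): a small p-value supports stationarity, a large one supports a unit root.
from statsmodels.tsa.stattools import adfuller
adf_stat, p_value = adfuller(df["Value"])[:2]
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")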
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/969/129969878.ipynb
| null | null |
[{"Id": 129969878, "ScriptId": 38661829, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11890312, "CreationDate": "05/17/2023 19:57:19", "VersionNumber": 1.0, "Title": "Staitionary and No stationary Timesereis", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 84.0, "LinesInsertedFromPrevious": 84.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
| null | null | null | null |
| false | 0 | 881 | 2 | 881 | 881 |
||
129969007
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import tensorflow as tf
from sklearn.model_selection import train_test_split
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
assert x_train.shape == (50000, 32, 32, 3)
assert x_test.shape == (10000, 32, 32, 3)
assert y_train.shape == (50000, 1)
assert y_test.shape == (10000, 1)
# The shape assertions ensure that the loaded data has the expected shapes. Specifically, x_train.shape is asserted to be (50000, 32, 32, 3), indicating that there are 50,000 training images, each with dimensions of 32x32 pixels and three color channels. Similarly, x_test.shape is asserted to be (10000, 32, 32, 3), representing 10,000 test images with the same image dimensions and color channels.
# The y_train.shape assertion confirms that y_train has a shape of (50000, 1), indicating that there are 50,000 corresponding labels for the training images. Similarly, y_test.shape asserts that y_test has a shape of (10000, 1), signifying 10,000 labels for the test images.
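# As a quick, purely illustrative sanity check of the layout described above (not required for
# training): each x_train entry is a 32x32x3 uint8 image and each y_train entry is a single class index.
import matplotlib.pyplot as plt
plt.imshow(x_train[0])
plt.title(f"label index: {int(y_train[0][0])}")
plt.show()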
y_train.shape
# scaled to the range between 0 and 1 by dividing each pixel value by 255.0.
x_train = x_train / 255.0
x_test = x_test / 255.0
# The labels are one-hot encoded using the tf.keras.utils.to_categorical function.
# One-hot encode the labels
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
# Split the training data into training and validation sets
x_train, x_val, y_train, y_val = train_test_split(
x_train, y_train, test_size=0.2, random_state=42
)
# Print the shapes of the preprocessed data
print("x_train shape:", x_train.shape)
print("y_train shape:", y_train.shape)
print("x_val shape:", x_val.shape)
print("y_val shape:", y_val.shape)
print("x_test shape:", x_test.shape)
print("y_test shape:", y_test.shape)
model = tf.keras.models.Sequential(
[
tf.keras.layers.Conv2D(
32, (3, 3), activation="relu", padding="same", input_shape=(32, 32, 3)
),
tf.keras.layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
tf.keras.layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
tf.keras.layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Dropout(0.4),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation="relu"),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(10, activation="softmax"),
]
)
# Print the model summary
model.summary()
# In this example, the model consists of several convolutional (Conv2D) layers, max pooling (MaxPooling2D) layers, a flatten layer, and dense (Dense) layers. The convolutional layers are responsible for capturing spatial patterns in the images, while the pooling layers reduce the spatial dimensions. The flatten layer converts the 2D feature maps into a 1D vector, and the dense layers are responsible for classification.
# The Dropout layers help prevent overfitting by randomly setting a fraction of the input units to 0 at each training step.
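# A standalone illustration of the Dropout behaviour described above (not part of the training
# pipeline): in training mode the layer zeroes a random subset of its inputs and rescales the
# survivors by 1 / (1 - rate), so the expected activation is unchanged; at inference it is a no-op.
import tensorflow as tf
drop_demo = tf.keras.layers.Dropout(0.5)
demo_inputs = tf.ones((1, 8))
print(drop_demo(demo_inputs, training=True).numpy())  # roughly half the entries become 0, the rest 2
print(drop_demo(demo_inputs, training=False).numpy())  # all ones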
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# Train the model
history = model.fit(
x_train, y_train, batch_size=128, epochs=20, validation_data=(x_val, y_val)
)
import matplotlib.pyplot as plt
pd.DataFrame(history.history).plot(figsize=(14, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
# Make predictions on new, unseen data
predictions = model.predict(x_test)
# Convert predictions to class labels
predicted_labels = tf.argmax(predictions, axis=1)
predictions.round(2)
y_pred = predicted_classes = np.argmax(predictions, axis=1)
y_pred
classes_names = [
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]
np.array(classes_names)[y_pred]
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/969/129969007.ipynb
| null | null |
[{"Id": 129969007, "ScriptId": 38657475, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8615026, "CreationDate": "05/17/2023 19:46:38", "VersionNumber": 1.0, "Title": "Classification by ciraf10", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 109.0, "LinesInsertedFromPrevious": 109.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 4}]
| null | null | null | null |
| false | 0 | 1,574 | 4 | 1,574 | 1,574 |
||
129969426
|
<jupyter_start><jupyter_text>Brian Tumor Dataset
### Context
This dataset consists of brain scan images of patients diagnosed with a brain tumour.
### Content
Separate files for train and test data, with features and labels stored separately.
Kaggle dataset identifier: brian-tumor-dataset
<jupyter_script># # Brain Tumour Classifier
# This image classifier was built as an experiment for lesson 2 of Fast.ai's ML course. I'm using it to learn the basics of their library on a meaningful dataset.
# # Initialising the Datablock and Dataloader
from fastai.data.all import *
from fastai.vision.all import *
# First I define a label function for the data loader. The possible categories are "Healthy" if the file name contains the string "Not Cancer", else it is labelled as "Tumour"
def is_healthy(file_name):
return file_name.startswith("Not") and not file_name == "Not Cancer (1).jpeg"
def label_y(x):
file_name = str(os.path.split(x)[1])
return "Healthy" if is_healthy(file_name) else "Tumour"
# Next, I create a datablock that takes images as x labels, and categories as y labels. I print the vocabulary created by the data block. The images are resized to be 256x256 and a random validation split is made.
path = "/kaggle/input/brian-tumor-dataset/Brain Tumor Data Set/Brain Tumor Data Set"
datablock = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
get_y=label_y,
splitter=RandomSplitter(valid_pct=0.2),
item_tfms=Resize(256),
)
datasets = datablock.datasets(path)
print(f"Categories: {datasets.vocab}")
dataloaders = datablock.dataloaders(path)
# # Fine Tuning
# A vision learner based on ResNet-18 is defined and the model is fine tuned for 3 epochs.
learn = vision_learner(dataloaders, resnet18, metrics=error_rate)
learn.fine_tune(3)
learn.export("tumour_classifier_model.pkl")
# # Visualising Accuracy
# I plot a confusion matrix and display data from the top 15 losses that occurred during validation
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
interp.plot_top_losses(15)
# # Making a Prediction
# I create a function to classify a sample image using the model I just trained. The probability of the image belonging to either category is displayed.
sample_image = PILImage.create(f"{path}/Healthy/Not Cancer (1).jpeg")
sample_image.thumbnail((192, 192))
sample_image
def classify_image(image):
prediction, _, probability = learn.predict(image)
return dict(zip(datasets.vocab, map(float, probability)))
classify_image(sample_image)
# # User Interface
# I'm going to make a simple Gradio interface that allows a user to upload a brain scan, and makes a prediction using the model.
import gradio as gr
received_image = gr.inputs.Image((256, 256))
label = gr.outputs.Label()
interface = gr.Interface(fn=classify_image, inputs=received_image, outputs=label)
interface.launch(inline=True)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/969/129969426.ipynb
|
brian-tumor-dataset
|
preetviradiya
|
[{"Id": 129969426, "ScriptId": 38614513, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15100726, "CreationDate": "05/17/2023 19:51:50", "VersionNumber": 5.0, "Title": "ResNet-18 Brain Tumour Classifier", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 77.0, "LinesInsertedFromPrevious": 4.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 73.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 186408383, "KernelVersionId": 129969426, "SourceDatasetVersionId": 2236708}]
|
[{"Id": 2236708, "DatasetId": 1343913, "DatasourceVersionId": 2278530, "CreatorUserId": 5456766, "LicenseName": "GPL 2", "CreationDate": "05/16/2021 10:20:25", "VersionNumber": 1.0, "Title": "Brian Tumor Dataset", "Slug": "brian-tumor-dataset", "Subtitle": "X-Ray images of Brain", "Description": "### Context\n\nThis dataset consists of the scanned images of brain of patient diagnosed of brain tumour.\n\n### Content\nSeparated files for train and test data with separating features and labels\n\n### Acknowledgements\nWe wouldn't be here without the help of others. If you owe any attributions or thanks, include them here along with any citations of past research.\n\n### Inspiration\nYour data will be in front of the world's largest data science community. What questions do you want to see answered?", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1343913, "CreatorUserId": 5456766, "OwnerUserId": 5456766.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2236708.0, "CurrentDatasourceVersionId": 2278530.0, "ForumId": 1362909, "Type": 2, "CreationDate": "05/16/2021 10:20:25", "LastActivityDate": "05/16/2021", "TotalViews": 42814, "TotalDownloads": 5355, "TotalVotes": 87, "TotalKernels": 38}]
|
[{"Id": 5456766, "UserName": "preetviradiya", "DisplayName": "Preet Viradiya", "RegisterDate": "07/12/2020", "PerformanceTier": 2}]
|
| false | 0 | 775 | 1 | 845 | 775 |
||
129544832
|
<jupyter_start><jupyter_text>Aeroclub 2023
Kaggle dataset identifier: aeroclub-2023
<jupyter_script>import pandas as pd
data = pd.read_excel("/kaggle/input/aeroclub-2023/1/Задача №1/train_data.xlsx")
data.head()
data.describe()
data["title"].unique
data["title"]
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/544/129544832.ipynb
|
aeroclub-2023
|
dimka11
|
[{"Id": 129544832, "ScriptId": 38520068, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9950472, "CreationDate": "05/14/2023 17:35:00", "VersionNumber": 1.0, "Title": "notebook96f66d702a", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 11.0, "LinesInsertedFromPrevious": 11.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185714643, "KernelVersionId": 129544832, "SourceDatasetVersionId": 5671957}]
|
[{"Id": 5671957, "DatasetId": 3260672, "DatasourceVersionId": 5747475, "CreatorUserId": 2778887, "LicenseName": "Unknown", "CreationDate": "05/12/2023 18:18:42", "VersionNumber": 1.0, "Title": "Aeroclub 2023", "Slug": "aeroclub-2023", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3260672, "CreatorUserId": 2778887, "OwnerUserId": 2778887.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5671957.0, "CurrentDatasourceVersionId": 5747475.0, "ForumId": 3326228, "Type": 2, "CreationDate": "05/12/2023 18:18:42", "LastActivityDate": "05/12/2023", "TotalViews": 47, "TotalDownloads": 3, "TotalVotes": 0, "TotalKernels": 1}]
|
[{"Id": 2778887, "UserName": "dimka11", "DisplayName": "Dmitry Sokolov", "RegisterDate": "02/04/2019", "PerformanceTier": 1}]
|
| false | 0 | 66 | 0 | 98 | 66 |
||
129544878
|
import pandas as pd
import amp_pd_peptide
import numpy as np
import sklearn
import collections
import warnings
import polars as pl
from sklearn.model_selection import KFold, GroupKFold, StratifiedKFold
from catboost import CatBoostRegressor
from scipy.optimize import minimize
import joblib
warnings.simplefilter("ignore")
# ### Data load
train = pl.read_csv(
"/kaggle/input/amp-parkinsons-disease-progression-prediction/train_clinical_data.csv"
)
extra = pl.read_csv(
"/kaggle/input/amp-parkinsons-disease-progression-prediction/supplemental_clinical_data.csv"
)
train_pe = pl.read_csv(
"/kaggle/input/amp-parkinsons-disease-progression-prediction/train_peptides.csv"
)
train_pr = pl.read_csv(
"/kaggle/input/amp-parkinsons-disease-progression-prediction/train_proteins.csv"
)
# - **[important]** In the training set, patients with UPDRS visits at months 3/6/9/18/30/42/54 tend to have higher scores, while patients with no visits at these months tend to have scores close to zero.
# - The same trend appears to hold for the test set: patients with no visits at months 6/18 are expected to have scores close to zero (at least judging by the leaderboard results), as sketched below.
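# The sketch below only illustrates how that visit-month heuristic could be expressed at inference
# time; it is not this notebook's actual submission logic, and the month set and the helper name are
# assumptions made for the example.
NON_ANNUAL_MONTHS = {3, 6, 9, 18, 30, 42, 54}  # assumed "symptomatic" visit months
def likely_near_zero(observed_visit_months):
    # Patients who never visit at the non-annual months are treated as the low-score group
    return len(NON_ANNUAL_MONTHS.intersection(observed_visit_months)) == 0
print(likely_near_zero({0, 12, 24}))  # True  -> predict scores close to zero
print(likely_near_zero({0, 6, 12}))  # False -> fall back to the trained models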
# train preprocess
print("before preprocess:", train.shape)
all_patient_ids = list(train["patient_id"].unique())
no_healty_users = list(
set(
train.filter((pl.col("visit_month").is_in([3, 6, 9, 18, 30, 42, 54]) == True))[
"patient_id"
]
)
)
healty_users = [i for i in all_patient_ids if i not in no_healty_users]
print(len(no_healty_users))
print(len(healty_users))
train_no_healthy = train.filter(pl.col("patient_id").is_in(no_healty_users)).to_pandas()
train_healthy = train.filter(pl.col("patient_id").is_in(healty_users)).to_pandas()
print("after preprocess:", train_no_healthy.shape, train_healthy.shape)
train_users = train_no_healthy["patient_id"].drop_duplicates()
train_pe = train_pe.filter(pl.col("patient_id").is_in(no_healty_users))
train_pr = train_pr.filter(pl.col("patient_id").is_in(no_healty_users))
# extra preprocess
extra_have_5 = list(extra.filter(pl.col("visit_month") == 5)["patient_id"])
extra_super_healty = list(
extra.filter((pl.col("visit_month") == 0) & (pl.col("updrs_1").is_null()))[
"patient_id"
]
)
extra_have_36 = list(extra.filter(pl.col("visit_month") == 36)["patient_id"])
# extra_unknown_list = list(set(extra_have_5 + extra_have_36 + extra_super_healty)) # LB best
extra_unknown_list = list(set(extra_have_5 + extra_super_healty)) # CV best
extra_have_5 = extra.filter(pl.col("patient_id").is_in(extra_have_5) == True)
extra_have_36 = extra.filter(pl.col("patient_id").is_in(extra_have_36) == True)
extra_super_healty = extra.filter(
pl.col("patient_id").is_in(extra_super_healty) == True
)
extra_no_healthy = extra.filter(
pl.col("patient_id").is_in(extra_unknown_list) == False
).to_pandas()
print(
len(extra_have_5),
len(extra_have_36),
len(extra_super_healty),
len(extra_no_healthy),
)
train_no_healthy_df = pd.concat([train_no_healthy, extra_no_healthy])
print(train_no_healthy_df.shape)
p_month_df = (
train_no_healthy_df.groupby("patient_id")["visit_month"].max().reset_index()
)
p_month_df.columns = ["patient_id", "max_visit_month"]
train_no_healthy_df = train_no_healthy_df.merge(p_month_df, on="patient_id", how="left")
# baseline
train_pr_pi = train_pr.pivot("NPX", "visit_id", "UniProt")
train_pe_pi = train_pe.pivot("PeptideAbundance", "visit_id", "Peptide")
train_pr_pi = train_pr_pi.to_pandas()
train_pe_pi = train_pe_pi.to_pandas()
train_pr_pe_base = train_pr_pi.merge(train_pe_pi, on="visit_id", how="left")
train_pr_pe_base["patient_id"] = (
train_pr_pe_base["visit_id"].apply(lambda x: x.split("_")[0]).astype(int)
)
train_pr_pe_base["visit_month"] = (
train_pr_pe_base["visit_id"].apply(lambda x: x.split("_")[1]).astype(int)
)
# feature importance top 50
cb_feature_dict = {
"updrs_1": [
"GEAGAPGEEDIQGPTK",
"WEAEPVYVQR",
"FIYGGC(UniMod_4)GGNR",
"P04275",
"FLPSYQAVEYMR",
"P04180",
"ITTTSPWMFPSR",
"Q06481",
"P07602",
"LSSWVLLM(UniMod_35)K",
"ASNLESGVPSR",
"KLSSWVLLMK",
"C(UniMod_4)C(UniMod_4)VEC(UniMod_4)PPC(UniMod_4)PAPPVAGPSVFLFPPKPK",
"FSVVYAK",
"LVGYLDR",
"QKWEAEPVYVQR",
"SSGLVSNAPGVQIR",
"NSPLDEENLTQENQDR",
"TVAAC(UniMod_4)NLPIVR",
"GKRPYQEGTPC(UniMod_4)SQC(UniMod_4)PSGYHC(UniMod_4)K",
"GVASLFAGR",
"Q14624",
"P19652",
"LHLDYIGPC(UniMod_4)K",
"RVDTVDPPYPR",
"GATLALTQVTPQDER",
"LMVELHNLYR",
"TGYYFDGISR",
"LEEQAQQIR",
"IVSSAM(UniMod_35)EPDREYHFGQAVR",
"WYEIEKIPTTFENGR",
"P01009",
"LADGGATNQGRVEIFYR",
"MNFRPGVLSSR",
"GNPEPTFSWTK",
"P01344",
"LRTEGDGVYTLNNEK",
"M(UniMod_35)LTPEHVFIHPGWK",
"TM(UniMod_35)LLQPAGSLGSYSYR",
"Q6UXB8",
"GRPGPQPWC(UniMod_4)ATTPNFDQDQR",
"P01594",
"SGIEC(UniMod_4)QLWR",
"TFISPIK",
"ESLQQMAEVTR",
"YFIDFVAR",
"LGMFNIQHC(UniMod_4)K",
"INENTGSVSVTR",
"SEYPSIK",
"LLDNWDSVTSTFSK",
],
"updrs_2": [
"LQDLYSIVR",
"P04180",
"P01861",
"O15240",
"Q6UXD5",
"P04433",
"QQETAAAETETR",
"P01717",
"P01857",
"TPC(UniMod_4)TVSC(UniMod_4)NIPVVSGKEC(UniMod_4)EEIIR",
"EGDMLTLFDGDGPSAR",
"NFPPSQDASGDLYTTSSQLTLPATQC(UniMod_4)PDGK",
"SSGLVSNAPGVQIR",
"GRPGPQPWC(UniMod_4)ATTPNFDQDQR",
"P02753",
"YWGVASFLQK",
"P01860",
"AKLEEQAQQIR",
"HLSLLTTLSNR",
"LLPAQLPAEKEVGPPLPQEAVPLQK",
"TLLSNLEEAKK",
"FSC(UniMod_4)MC(UniMod_4)PQGYQVVR",
"ALEQDLPVNIK",
"RLEGQEEEEDNRDSSMK",
"VHKEDDGVPVIC(UniMod_4)QVEHPAVTGNLQTQR",
"C(UniMod_4)LVEKGDVAFVKHQTVPQNTGGK",
"LSPEDYTLK",
"P10645",
"C(UniMod_4)TTPPPSSGPTYQC(UniMod_4)LK",
"SVIPSDGPSVAC(UniMod_4)VKK",
"DC(UniMod_4)HLAQVPSHTVVAR",
"LVFFAEDVGSNK",
"GGETSEMYLIQPDSSVKPYR",
"ILAGSADSEGVAAPR",
"EAEEETTNDNGVLVLEPARK",
"LLIYDASNR",
"IIGYTPDLDPETVDDAFAR",
"ITTTSPWMFPSR",
"VMPIC(UniMod_4)LPSKDYAEVGR",
"P07602",
"SC(UniMod_4)ESNSPFPVHPGTAEC(UniMod_4)C(UniMod_4)TK",
"YPGPQAEGDSEGLSQGLVDREK",
"IEEELGDEAR",
"P07711",
"GEAGAPGEEDIQGPTK",
"QHM(UniMod_35)DSDSSPSSSSTYC(UniMod_4)NQMMR",
"P04406",
"Q99829",
"C(UniMod_4)C(UniMod_4)VEC(UniMod_4)PPC(UniMod_4)PAPPVAGPSVFLFPPKPK",
"VIAVNEVGR",
],
"updrs_3": [
"NPDSSTTGPWC(UniMod_4)YTTDPTVR",
"P00738",
"IYISGMAPRPSLAK",
"KAADDTWEPFASGK",
"FFLC(UniMod_4)QVAGDAK",
"P01717",
"EHVAHLLFLR",
"IWDVVEK",
"DQC(UniMod_4)QVDSQC(UniMod_4)PGQMK",
"RGYQLSDVDGVTC(UniMod_4)EDIDEC(UniMod_4)ALPTGGHIC(UniMod_4)SYR",
"WYEIEKIPTTFENGR",
"GQSISVTSIRPC(UniMod_4)AAETQ",
"P01877",
"DLATVYVDVLK",
"KPQSAVYSTGSNGILLC(UniMod_4)EAEGEPQPTIK",
"EGDMLTLFDGDGPSAR",
"GRPGPQPWC(UniMod_4)ATTPNFDQDQR",
"RPGGEPSPEGTTGQSYNQYSQR",
"VMTPAVYAPYDVK",
"YWGVASFLQK",
"GKRPYQEGTPC(UniMod_4)SQC(UniMod_4)PSGYHC(UniMod_4)K",
"VNGSPVDNHPFAGDVVFPR",
"VIAVNEVGR",
"O00533",
"VMPIC(UniMod_4)LPSKDYAEVGR",
"P12109",
"P04004",
"IEIPSSVQQVPTIIK",
"C(UniMod_4)YTAVVPLVYGGETK",
"TLKIENVSYQDKGNYR",
"Q6UXD5",
"RVDTVDPPYPR",
"VVVNFAPTIQEIK",
"QQETAAAETETR",
"GLEFLSVPSTYYK",
"HQPQEFPTYVEPTNDEIC(UniMod_4)EAFRK",
"LQDLYSIVR",
"P01591",
"P00748",
"P02753",
"YGQTIRPIC(UniMod_4)LPC(UniMod_4)TEGTTR",
"HQPQEFPTYVEPTNDEIC(UniMod_4)EAFRKDPK",
"LVFFAEDVGSNK",
"SYELTQPPSVSVSPGQTAR",
"AYQGVAAPFPK",
"VASYGVKPR",
"FKDLGEENFK",
"P00441",
"YPGPQAEGDSEGLSQGLVDREK",
"HQPQEFPTYVEPTNDEIC(UniMod_4)EAFR",
],
}
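# The lists above are described as the "feature importance top 50". A minimal, hypothetical sketch of how
# such a ranking could be rebuilt from an already fitted CatBoostRegressor is given below; the helper is
# illustrative only and is not called anywhere in this notebook.
def top_k_features(fitted_model, feature_names, k=50):
    # CatBoost returns importances aligned with the order of the training feature columns
    importances = fitted_model.get_feature_importance()
    ranked = sorted(zip(feature_names, importances), key=lambda t: t[1], reverse=True)
    return [name for name, _ in ranked[:k]]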
cb_pr_params = {
"iterations": 100000,
"early_stopping_rounds": 30,
"depth": 2,
"learning_rate": 0.03,
"loss_function": "MAE",
"eval_metric": "MAE",
"random_seed": 1208,
"min_child_samples": 30,
"subsample": 0.8,
"verbose": 0,
"l2_leaf_reg": 5,
}
def smape_plus_1(y_true, y_pred):
y_true_plus_1 = y_true + 1
y_pred_plus_1 = y_pred + 1
metric = np.zeros(len(y_true_plus_1))
numerator = np.abs(y_true_plus_1 - y_pred_plus_1)
denominator = (np.abs(y_true_plus_1) + np.abs(y_pred_plus_1)) / 2
mask_not_zeros = (y_true_plus_1 != 0) | (y_pred_plus_1 != 0)
metric[mask_not_zeros] = numerator[mask_not_zeros] / denominator[mask_not_zeros]
return 100 * np.nanmean(metric)
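# Small worked example of the metric above (illustrative only): a perfect prediction scores 0, and the
# "+1" shift keeps zero-valued targets well defined in the denominator.
_demo_true = np.array([0.0, 10.0, 20.0])
print(smape_plus_1(_demo_true, _demo_true))  # 0.0
print(smape_plus_1(_demo_true, _demo_true + 1.0))  # a positive SMAPE value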
def pr_model(
train, valid, cb_params, target_columns, cb_feature_dict, fold, save_model
):
valid_df = valid[["patient_id", "visit_month", target_columns]]
# cb
model_cb = CatBoostRegressor(**cb_params)
model_cb.fit(
train[cb_feature_dict[target_columns]],
train[target_columns],
eval_set=(valid[cb_feature_dict[target_columns]], valid[target_columns]),
)
if save_model == True:
joblib.dump(model_cb, f"model_cb_{target_columns}_{fold}.pkl")
joblib.dump(
cb_feature_dict[target_columns], f"cb_use_features_{target_columns}.pkl"
)
pred = model_cb.predict(valid[cb_feature_dict[target_columns]])
valid_df["oof_cb"] = pred
return valid_df
# ### Make Model
kf = KFold(n_splits=10)
all_score_list = []
null_fill = True
model_ratio = 1
for target_columns in ["updrs_1", "updrs_2", "updrs_3"]:
score_list = []
for f, (idx_tr, idx_va) in enumerate(kf.split(train_users)):
tr_users = list(train_users.iloc[idx_tr])
va_users = list(train_users.iloc[idx_va])
train_no_healthy_fold_tr_df = train_no_healthy_df[
train_no_healthy_df["patient_id"].isin(va_users) == False
]
train_no_healthy_fold_val_df = train_no_healthy_df[
train_no_healthy_df["patient_id"].isin(va_users) == True
]
# model df
train_no_healthy_fold_tr_model_df = train_no_healthy_fold_tr_df.merge(
train_pr_pe_base, on=["patient_id", "visit_month"], how="inner"
)
train_no_healthy_fold_val_model_df = train_no_healthy_fold_val_df.merge(
train_pr_pe_base, on=["patient_id", "visit_month"], how="inner"
)
train_no_healthy_fold_tr_model_df = train_no_healthy_fold_tr_model_df[
train_no_healthy_fold_tr_model_df[target_columns].isnull() == False
]
train_no_healthy_fold_val_model_df = train_no_healthy_fold_val_model_df[
train_no_healthy_fold_val_model_df[target_columns].isnull() == False
]
pf_model_valid_df = pr_model(
train_no_healthy_fold_tr_model_df,
train_no_healthy_fold_val_model_df,
cb_pr_params,
target_columns,
cb_feature_dict,
f,
save_model=True,
)
val_df = train_no_healthy_fold_val_df[
["patient_id", "visit_month", target_columns]
].merge(
pf_model_valid_df[["patient_id", "visit_month", "oof_cb"]],
on=["patient_id", "visit_month"],
how="left",
)
        # In terms of CV, it is better not to apply this forward fill for updrs_3
if null_fill == True:
val_df = val_df.groupby("patient_id").fillna(method="ffill")
val_df["oof_cb"] = np.where(
val_df["oof_cb"].isnull(), val_df[target_columns], val_df["oof_cb"]
)
val_df[target_columns] = (val_df["oof_cb"] * model_ratio) + (
val_df[target_columns] * (1 - model_ratio)
)
validation_pred = np.round(val_df[target_columns].values.ravel())
validation_target = train_no_healthy_fold_val_df[target_columns].values.ravel()
score = smape_plus_1(validation_target, validation_pred)
score_list.append(score)
target_score = np.array(score_list).mean()
print(target_columns, target_score)
all_score_list.append(target_score)
total_score = np.array(all_score_list).mean()
print("total:", total_score)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/544/129544878.ipynb
| null | null |
[{"Id": 129544878, "ScriptId": 38511692, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 397595, "CreationDate": "05/14/2023 17:35:30", "VersionNumber": 3.0, "Title": "amp_catboost_protein_model", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 322.0, "LinesInsertedFromPrevious": 5.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 317.0, "LinesInsertedFromFork": 46.0, "LinesDeletedFromFork": 462.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 276.0, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 4,813 | 0 | 4,813 | 4,813 |
||
129864806
|
<jupyter_start><jupyter_text>DC comics
The DC Universe (DCU) is the fictional shared universe where most stories in American comic book titles published by DC Comics take place. DC superheroes such as Superman, Batman, Wonder Woman, Martian Manhunter, The Flash, Green Lantern, and Aquaman are from this universe, as well as teams such as the Justice League and the Teen Titans. It also contains well-known supervillains such as Lex Luthor, the Joker, Sinestro, Harley Quinn, Reverse-Flash, Darkseid, General Zod, Penguin, the Riddler, Catwoman, Ra’s al Ghul, Bane, and Two-Face. In context, the term "DC Universe" usually refers to the main DC continuity. The main DC Universe, as well as the alternate realities related to it, were quickly adapted to other media such as film serials or radio dramas. In subsequent decades, the continuity between all of these media became increasingly complex with certain storylines and events designed to simplify or streamline the more confusing aspects of characters' histories. The basic concept of the DC Universe is that it is just like the real world, but with superheroes and supervillains existing in it. However, there are other corollary differences resulting from the justifications implied by that main concept. Many fictional countries, such as Qurac, Vlatava, and Zandia, exist in it. Though stories are often set in the United States of America, they are as often as not set in fictional cities, such as Gotham City or Metropolis. These cities are effectively archetypes of cities, with Gotham City embodying more of the negative aspects of life in a large city, and Metropolis reflecting more of the positive aspects. Sentient alien species (such as Kryptonians and Thanagarians) and even functioning interstellar societies are generally known to exist, and the arrival of alien spacecraft is not uncommon. Technologies which are only theoretical in the real world, such as artificial intelligence, or are outright impossible according to modern science, such as faster-than-light travel, are functional and reproducible, though they are often portrayed as highly experimental and difficult to achieve. Demonstrable magic exists and can be learned. The general history of the fictional world is similar to the real one (for instance, there was a Roman Empire, and World War II and 9/11 both occurred), but many fantastic additions exist, such as the known existence of Atlantis. In recent years, stories have increasingly described events which bring the DC Universe farther away from reality, such as World War III occurring, Lex Luthor being elected as President of the United States in 2000, and entire cities and countries being destroyed. There are other minor variations, such as the Earth being slightly larger than ours (to accommodate the extra countries), and the planet Saturn having 18 moons rather than 19 because Superman destroyed one.
This data set consists of ten columns. The columns are :
Page_id - An ID assigned to each record
Name- Name of the DC character
URL - It consists of the Wikipedia link of the information regarding each character
ID - The identity status of the character (for example, a secret or public identity)
Align - Information whether the character is a good or a bad character in the DC universe
Eye - Eye color of each character
Hair- Hair color of each character
Sex- Gender of the character
Alive - Information whether the character is alive or deceased
Appearances - The total number of appearances of the character in the DC universe
Would love to see your analysis and insights from this data set using various algorithms. Kindly let me know about your work from this data set via email. Feel free to connect on LinkedIn.
email : [email protected]
LinkedIn: https://www.linkedin.com/in/aruna-s-/
Kaggle dataset identifier: dc-comics
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import numpy as np
import pandas as pd
# Read the CSV file into a pandas DataFrame
df = pd.read_csv("/kaggle/input/dc-comics/dc-comics.csv")
df.head()
df.info()
df.describe()
# Remove unnecessary columns
columns_to_drop = ["page_id", "urlslug"]
df.drop(columns=columns_to_drop, inplace=True)
# Clean missing values
df.dropna(inplace=True)
# Generate a unique identifier for each row
df["Row_ID"] = range(1, len(df) + 1)
df["Row_ID"] = df["Row_ID"].astype(str)
# Convert string columns to lowercase for consistency
string_columns = ["name", "ALIGN", "EYE", "HAIR", "SEX", "ALIVE"]
df[string_columns] = df[string_columns].apply(lambda x: x.str.lower())
# Remove leading/trailing whitespaces in string columns
df[string_columns] = df[string_columns].apply(lambda x: x.str.strip())
# Save the cleaned dataset to a new CSV file
df.to_csv("cleaned_superheroes_data.csv", index=False)
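# Optional quick look at the cleaned data (an illustrative sketch, not part of the original notebook):
# after the lowercase/strip normalisation above, the alignment and gender splits of the remaining
# characters can be inspected directly.
print(df["ALIGN"].value_counts())
print(df["SEX"].value_counts())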
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/864/129864806.ipynb
|
dc-comics
|
arunasivapragasam
|
[{"Id": 129864806, "ScriptId": 38624924, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11318851, "CreationDate": "05/17/2023 04:02:35", "VersionNumber": 1.0, "Title": "Data Cleaning of DC comics dataset", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 54.0, "LinesInsertedFromPrevious": 54.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186262787, "KernelVersionId": 129864806, "SourceDatasetVersionId": 3124482}]
|
[{"Id": 3124482, "DatasetId": 1903303, "DatasourceVersionId": 3173505, "CreatorUserId": 5820768, "LicenseName": "Unknown", "CreationDate": "02/01/2022 13:13:19", "VersionNumber": 3.0, "Title": "DC comics", "Slug": "dc-comics", "Subtitle": "Dataset of DC universe characters", "Description": "The DC Universe (DCU) is the fictional shared universe where most stories in American comic book titles published by DC Comics take place. DC superheroes such as Superman, Batman, Wonder Woman, Martian Manhunter, The Flash, Green Lantern, and Aquaman are from this universe, as well as teams such as the Justice League and the Teen Titans. It also contains well-known supervillains such as Lex Luthor, the Joker, Sinestro, Harley Quinn, Reverse-Flash, Darkseid, General Zod, Penguin, the Riddler, Catwoman, Ra\u2019s al Ghul, Bane, and Two-Face. In context, the term \"DC Universe\" usually refers to the main DC continuity. The main DC Universe, as well as the alternate realities related to it, were quickly adapted to other media such as film serials or radio dramas. In subsequent decades, the continuity between all of these media became increasingly complex with certain storylines and events designed to simplify or streamline the more confusing aspects of characters' histories. The basic concept of the DC Universe is that it is just like the real world, but with superheroes and supervillains existing in it. However, there are other corollary differences resulting from the justifications implied by that main concept. Many fictional countries, such as Qurac, Vlatava, and Zandia, exist in it. Though stories are often set in the United States of America, they are as often as not set in fictional cities, such as Gotham City or Metropolis. These cities are effectively archetypes of cities, with Gotham City embodying more of the negative aspects of life in a large city, and Metropolis reflecting more of the positive aspects. Sentient alien species (such as Kryptonians and Thanagarians) and even functioning interstellar societies are generally known to exist, and the arrival of alien spacecraft is not uncommon. Technologies which are only theoretical in the real world, such as artificial intelligence, or are outright impossible according to modern science, such as faster-than-light travel, are functional and reproducible, though they are often portrayed as highly experimental and difficult to achieve. Demonstrable magic exists and can be learned. The general history of the fictional world is similar to the real one (for instance, there was a Roman Empire, and World War II and 9/11 both occurred), but many fantastic additions exist, such as the known existence of Atlantis. In recent years, stories have increasingly described events which bring the DC Universe farther away from reality, such as World War III occurring, Lex Luthor being elected as President of the United States in 2000, and entire cities and countries being destroyed. There are other minor variations, such as the Earth being slightly larger than ours (to accommodate the extra countries), and the planet Saturn having 18 moons rather than 19 because Superman destroyed one.\n\n\n\n\n\nThis data set consists of ten columns. 
The columns are :\n\nPage_id - An ID assigned to each record \nName- Name of the DC character \nURL - It consists of the Wikipedia link of the information regarding each character\nID- The district from which the patient belongs\nAlign - Information whether the character is a good or a bad character in the DC universe\nEye - Eye color of each character\nHair- Hair color of each character\nSex- Gender of the character\nAlive - Information whether the character is alive or deceased\nAppearances - The total number of appearances of the character in the DC universe\n\nWould love to see your analysis and insights from this data set using various algorithms. Kindly let me know about your work from this data set via email. Feel free to connect on LinkedIn.\n\nemail : [email protected]\nLinkedIn: https://www.linkedin.com/in/aruna-s-/", "VersionNotes": "Data Update 2022/02/01", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1903303, "CreatorUserId": 5820768, "OwnerUserId": 5820768.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3124482.0, "CurrentDatasourceVersionId": 3173505.0, "ForumId": 1926686, "Type": 2, "CreationDate": "01/31/2022 16:26:21", "LastActivityDate": "01/31/2022", "TotalViews": 1780, "TotalDownloads": 183, "TotalVotes": 22, "TotalKernels": 1}]
|
[{"Id": 5820768, "UserName": "arunasivapragasam", "DisplayName": "Aruna S", "RegisterDate": "09/21/2020", "PerformanceTier": 2}]
|
| false | 1 | 472 | 0 | 1,396 | 472 |
||
129987615
|
<jupyter_start><jupyter_text>IBM HR Analytics Employee Attrition & Performance
Uncover the factors that lead to employee attrition and explore important questions such as ‘show me a breakdown of distance from home by job role and attrition’ or ‘compare average monthly income by education and attrition’. This is a fictional data set created by IBM data scientists.
Education
1 'Below College'
2 'College'
3 'Bachelor'
4 'Master'
5 'Doctor'
EnvironmentSatisfaction
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
JobInvolvement
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
JobSatisfaction
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
PerformanceRating
1 'Low'
2 'Good'
3 'Excellent'
4 'Outstanding'
RelationshipSatisfaction
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
WorkLifeBalance
1 'Bad'
2 'Good'
3 'Better'
4 'Best'
Kaggle dataset identifier: ibm-hr-analytics-attrition-dataset
<jupyter_code>import pandas as pd
df = pd.read_csv('ibm-hr-analytics-attrition-dataset/WA_Fn-UseC_-HR-Employee-Attrition.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 1470 entries, 0 to 1469
Data columns (total 35 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Age 1470 non-null int64
1 Attrition 1470 non-null object
2 BusinessTravel 1470 non-null object
3 DailyRate 1470 non-null int64
4 Department 1470 non-null object
5 DistanceFromHome 1470 non-null int64
6 Education 1470 non-null int64
7 EducationField 1470 non-null object
8 EmployeeCount 1470 non-null int64
9 EmployeeNumber 1470 non-null int64
10 EnvironmentSatisfaction 1470 non-null int64
11 Gender 1470 non-null object
12 HourlyRate 1470 non-null int64
13 JobInvolvement 1470 non-null int64
14 JobLevel 1470 non-null int64
15 JobRole 1470 non-null object
16 JobSatisfaction 1470 non-null int64
17 MaritalStatus 1470 non-null object
18 MonthlyIncome 1470 non-null int64
19 MonthlyRate 1470 non-null int64
20 NumCompaniesWorked 1470 non-null int64
21 Over18 1470 non-null object
22 OverTime 1470 non-null object
23 PercentSalaryHike 1470 non-null int64
24 PerformanceRating 1470 non-null int64
25 RelationshipSatisfaction 1470 non-null int64
26 StandardHours 1470 non-null int64
27 StockOptionLevel 1470 non-null int64
28 TotalWorkingYears 1470 non-null int64
29 TrainingTimesLastYear 1470 non-null int64
30 WorkLifeBalance 1470 non-null int64
31 YearsAtCompany 1470 non-null int64
32 YearsInCurrentRole 1470 non-null int64
33 YearsSinceLastPromotion 1470 non-null int64
34 YearsWithCurrManager 1470 non-null int64
dtypes: int64(26), object(9)
memory usage: 402.1+ KB
<jupyter_text>Examples:
{
"Age": 41,
"Attrition": "Yes",
"BusinessTravel": "Travel_Rarely",
"DailyRate": 1102,
"Department": "Sales",
"DistanceFromHome": 1,
"Education": 2,
"EducationField": "Life Sciences",
"EmployeeCount": 1,
"EmployeeNumber": 1,
"EnvironmentSatisfaction": 2,
"Gender": "Female",
"HourlyRate": 94,
"JobInvolvement": 3,
"JobLevel": 2,
"JobRole": "Sales Executive",
"JobSatisfaction": 4,
"MaritalStatus": "Single",
"MonthlyIncome": 5993,
"MonthlyRate": 19479,
"...": "and 15 more columns"
}
{
"Age": 49,
"Attrition": "No",
"BusinessTravel": "Travel_Frequently",
"DailyRate": 279,
"Department": "Research & Development",
"DistanceFromHome": 8,
"Education": 1,
"EducationField": "Life Sciences",
"EmployeeCount": 1,
"EmployeeNumber": 2,
"EnvironmentSatisfaction": 3,
"Gender": "Male",
"HourlyRate": 61,
"JobInvolvement": 2,
"JobLevel": 2,
"JobRole": "Research Scientist",
"JobSatisfaction": 2,
"MaritalStatus": "Married",
"MonthlyIncome": 5130,
"MonthlyRate": 24907,
"...": "and 15 more columns"
}
{
"Age": 37,
"Attrition": "Yes",
"BusinessTravel": "Travel_Rarely",
"DailyRate": 1373,
"Department": "Research & Development",
"DistanceFromHome": 2,
"Education": 2,
"EducationField": "Other",
"EmployeeCount": 1,
"EmployeeNumber": 4,
"EnvironmentSatisfaction": 4,
"Gender": "Male",
"HourlyRate": 92,
"JobInvolvement": 2,
"JobLevel": 1,
"JobRole": "Laboratory Technician",
"JobSatisfaction": 3,
"MaritalStatus": "Single",
"MonthlyIncome": 2090,
"MonthlyRate": 2396,
"...": "and 15 more columns"
}
{
"Age": 33,
"Attrition": "No",
"BusinessTravel": "Travel_Frequently",
"DailyRate": 1392,
"Department": "Research & Development",
"DistanceFromHome": 3,
"Education": 4,
"EducationField": "Life Sciences",
"EmployeeCount": 1,
"EmployeeNumber": 5,
"EnvironmentSatisfaction": 4,
"Gender": "Female",
"HourlyRate": 56,
"JobInvolvement": 3,
"JobLevel": 1,
"JobRole": "Research Scientist",
"JobSatisfaction": 3,
"MaritalStatus": "Married",
"MonthlyIncome": 2909,
"MonthlyRate": 23159,
"...": "and 15 more columns"
}
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sn
import matplotlib.pyplot as plt
pd.set_option("display.max_columns", None)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
dataset = pd.read_csv(
"/kaggle/input/ibm-hr-analytics-attrition-dataset/WA_Fn-UseC_-HR-Employee-Attrition.csv"
)
dataset.head()
columns = dataset.columns.to_list()
# check the values per column: unique values and whether there are any null values.
for column in columns:
print(dataset[column].unique())
print(sum(dataset[column].isna()))
# summing the null-value check: if the sum is zero, there are no null values.
# List of dictionaries to convert non-numeric values into numeric ones (and back).
dictionary_variables = []
inverted_dictionary = []
dict_default = {
"Education": {
1: "Below College",
2: "College",
3: "Bachelor",
4: "Master",
5: "Doctor",
},
"EnvironmentSatisfaction": {1: "Low", 2: "Medium", 3: "High", 4: "Very High"},
"JobInvolvement": {1: "Low", 2: "Medium", 3: "High", 4: "Very High"},
"JobSatisfaction": {1: "Low", 2: "Medium", 3: "High", 4: "Very High"},
"PerformanceRating": {1: "Low", 2: "Good", 3: "Excellent", 4: "Outstanding"},
"RelationshipSatisfaction": {1: "Low", 2: "Good", 3: "Excellent", 4: "Outstanding"},
"WorkLifeBalance": {1: "Bad", 2: "Good", 3: "Better", 4: "Best"},
}
for column in columns:
    try:
        # numeric probe: the division only succeeds for numeric columns
        dataset[column][2] / 1
        dictionary_variables.append([column, 0])
        try:
            # keep the readable labels defined in dict_default for the ordinal columns
            dict_default[column]
            inverted_dictionary.append([column, dict_default[column]])
        except KeyError:
            inverted_dictionary.append([column, 0])
    except TypeError:
        # non-numeric column: build label -> code and code -> label mappings
        classes = dataset[column].unique()
        inverted_dictionary.append([column, dict(zip(range(len(classes)), classes))])
        dictionary_variables.append([column, dict(zip(classes, range(len(classes))))])
print(inverted_dictionary)
dataset.describe()
# converting the dataset to numeric codes
for value in dictionary_variables:
if not value[1] == 0:
dataset[value[0]] = dataset[value[0]].map(value[1])
dataset.head()
dataset.describe()
plt.subplots(figsize=(20, 15))
sn.heatmap(dataset.corr(), linewidth=0.2)
# converting the dataset back to categorical labels (Attrition is re-mapped to numeric)
for value in range(len(inverted_dictionary)):
if not inverted_dictionary[value][1] == 0:
dataset[inverted_dictionary[value][0]] = dataset[
inverted_dictionary[value][0]
].map(inverted_dictionary[value][1])
if dictionary_variables[value][0] == "Attrition":
dataset[dictionary_variables[value][0]] = dataset[
dictionary_variables[value][0]
].map(dictionary_variables[value][1])
dataset.head()
plt.subplots(figsize=(20, 15))
sn.heatmap(dataset.corr(), linewidth=0.2)
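# A complementary, illustrative sketch (not part of the original notebook): since Attrition is re-mapped to a
# numeric code above, its correlation with the remaining numeric columns can be ranked directly, which is the
# tabular counterpart of the heatmap.
numeric_cols = dataset.select_dtypes("number")
print(numeric_cols.corr()["Attrition"].drop("Attrition").sort_values())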
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/987/129987615.ipynb
|
ibm-hr-analytics-attrition-dataset
|
pavansubhasht
|
[{"Id": 129987615, "ScriptId": 38665575, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4609995, "CreationDate": "05/18/2023 00:38:43", "VersionNumber": 1.0, "Title": "notebook455d960c79", "EvaluationDate": "05/18/2023", "IsChange": true, "TotalLines": 93.0, "LinesInsertedFromPrevious": 93.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186434867, "KernelVersionId": 129987615, "SourceDatasetVersionId": 1925}]
|
[{"Id": 1925, "DatasetId": 1067, "DatasourceVersionId": 1925, "CreatorUserId": 862007, "LicenseName": "Database: Open Database, Contents: Database Contents", "CreationDate": "03/31/2017 06:55:16", "VersionNumber": 1.0, "Title": "IBM HR Analytics Employee Attrition & Performance", "Slug": "ibm-hr-analytics-attrition-dataset", "Subtitle": "Predict attrition of your valuable employees", "Description": "Uncover the factors that lead to employee attrition and explore important questions such as \u2018show me a breakdown of distance from home by job role and attrition\u2019 or \u2018compare average monthly income by education and attrition\u2019. This is a fictional data set created by IBM data scientists.\n\nEducation\n\t1 'Below College'\n\t2 'College'\n\t3 'Bachelor'\n\t4 'Master'\n\t5 'Doctor'\n\t\nEnvironmentSatisfaction\n\t1 'Low'\n\t2 'Medium'\n\t3 'High'\n\t4 'Very High'\n\t\nJobInvolvement\t\n 1 'Low'\n\t2 'Medium'\n\t3 'High'\n\t4 'Very High'\n\t\nJobSatisfaction\t\n 1 'Low'\n\t2 'Medium'\n\t3 'High'\n\t4 'Very High'\n\t\nPerformanceRating\t\n 1 'Low'\n\t2 'Good'\n\t3 'Excellent'\n\t4 'Outstanding'\n\t\nRelationshipSatisfaction\t\n 1 'Low'\n\t2 'Medium'\n\t3 'High'\n\t4 'Very High'\n\t\nWorkLifeBalance\t\n 1 'Bad'\n\t2 'Good'\n\t3 'Better'\n\t4 'Best'", "VersionNotes": "Initial release", "TotalCompressedBytes": 227977.0, "TotalUncompressedBytes": 227977.0}]
|
[{"Id": 1067, "CreatorUserId": 862007, "OwnerUserId": 862007.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1925.0, "CurrentDatasourceVersionId": 1925.0, "ForumId": 3045, "Type": 2, "CreationDate": "03/31/2017 06:55:16", "LastActivityDate": "02/06/2018", "TotalViews": 1254989, "TotalDownloads": 142350, "TotalVotes": 2254, "TotalKernels": 821}]
|
[{"Id": 862007, "UserName": "pavansubhasht", "DisplayName": "pavansubhash", "RegisterDate": "01/10/2017", "PerformanceTier": 0}]
|
|
[{"ibm-hr-analytics-attrition-dataset/WA_Fn-UseC_-HR-Employee-Attrition.csv": {"column_names": "[\"Age\", \"Attrition\", \"BusinessTravel\", \"DailyRate\", \"Department\", \"DistanceFromHome\", \"Education\", \"EducationField\", \"EmployeeCount\", \"EmployeeNumber\", \"EnvironmentSatisfaction\", \"Gender\", \"HourlyRate\", \"JobInvolvement\", \"JobLevel\", \"JobRole\", \"JobSatisfaction\", \"MaritalStatus\", \"MonthlyIncome\", \"MonthlyRate\", \"NumCompaniesWorked\", \"Over18\", \"OverTime\", \"PercentSalaryHike\", \"PerformanceRating\", \"RelationshipSatisfaction\", \"StandardHours\", \"StockOptionLevel\", \"TotalWorkingYears\", \"TrainingTimesLastYear\", \"WorkLifeBalance\", \"YearsAtCompany\", \"YearsInCurrentRole\", \"YearsSinceLastPromotion\", \"YearsWithCurrManager\"]", "column_data_types": "{\"Age\": \"int64\", \"Attrition\": \"object\", \"BusinessTravel\": \"object\", \"DailyRate\": \"int64\", \"Department\": \"object\", \"DistanceFromHome\": \"int64\", \"Education\": \"int64\", \"EducationField\": \"object\", \"EmployeeCount\": \"int64\", \"EmployeeNumber\": \"int64\", \"EnvironmentSatisfaction\": \"int64\", \"Gender\": \"object\", \"HourlyRate\": \"int64\", \"JobInvolvement\": \"int64\", \"JobLevel\": \"int64\", \"JobRole\": \"object\", \"JobSatisfaction\": \"int64\", \"MaritalStatus\": \"object\", \"MonthlyIncome\": \"int64\", \"MonthlyRate\": \"int64\", \"NumCompaniesWorked\": \"int64\", \"Over18\": \"object\", \"OverTime\": \"object\", \"PercentSalaryHike\": \"int64\", \"PerformanceRating\": \"int64\", \"RelationshipSatisfaction\": \"int64\", \"StandardHours\": \"int64\", \"StockOptionLevel\": \"int64\", \"TotalWorkingYears\": \"int64\", \"TrainingTimesLastYear\": \"int64\", \"WorkLifeBalance\": \"int64\", \"YearsAtCompany\": \"int64\", \"YearsInCurrentRole\": \"int64\", \"YearsSinceLastPromotion\": \"int64\", \"YearsWithCurrManager\": \"int64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1470 entries, 0 to 1469\nData columns (total 35 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Age 1470 non-null int64 \n 1 Attrition 1470 non-null object\n 2 BusinessTravel 1470 non-null object\n 3 DailyRate 1470 non-null int64 \n 4 Department 1470 non-null object\n 5 DistanceFromHome 1470 non-null int64 \n 6 Education 1470 non-null int64 \n 7 EducationField 1470 non-null object\n 8 EmployeeCount 1470 non-null int64 \n 9 EmployeeNumber 1470 non-null int64 \n 10 EnvironmentSatisfaction 1470 non-null int64 \n 11 Gender 1470 non-null object\n 12 HourlyRate 1470 non-null int64 \n 13 JobInvolvement 1470 non-null int64 \n 14 JobLevel 1470 non-null int64 \n 15 JobRole 1470 non-null object\n 16 JobSatisfaction 1470 non-null int64 \n 17 MaritalStatus 1470 non-null object\n 18 MonthlyIncome 1470 non-null int64 \n 19 MonthlyRate 1470 non-null int64 \n 20 NumCompaniesWorked 1470 non-null int64 \n 21 Over18 1470 non-null object\n 22 OverTime 1470 non-null object\n 23 PercentSalaryHike 1470 non-null int64 \n 24 PerformanceRating 1470 non-null int64 \n 25 RelationshipSatisfaction 1470 non-null int64 \n 26 StandardHours 1470 non-null int64 \n 27 StockOptionLevel 1470 non-null int64 \n 28 TotalWorkingYears 1470 non-null int64 \n 29 TrainingTimesLastYear 1470 non-null int64 \n 30 WorkLifeBalance 1470 non-null int64 \n 31 YearsAtCompany 1470 non-null int64 \n 32 YearsInCurrentRole 1470 non-null int64 \n 33 YearsSinceLastPromotion 1470 non-null int64 \n 34 YearsWithCurrManager 1470 non-null int64 \ndtypes: int64(26), object(9)\nmemory usage: 402.1+ KB\n", 
"summary": "{\"Age\": {\"count\": 1470.0, \"mean\": 36.923809523809524, \"std\": 9.135373489136732, \"min\": 18.0, \"25%\": 30.0, \"50%\": 36.0, \"75%\": 43.0, \"max\": 60.0}, \"DailyRate\": {\"count\": 1470.0, \"mean\": 802.4857142857143, \"std\": 403.50909994352816, \"min\": 102.0, \"25%\": 465.0, \"50%\": 802.0, \"75%\": 1157.0, \"max\": 1499.0}, \"DistanceFromHome\": {\"count\": 1470.0, \"mean\": 9.19251700680272, \"std\": 8.106864435666074, \"min\": 1.0, \"25%\": 2.0, \"50%\": 7.0, \"75%\": 14.0, \"max\": 29.0}, \"Education\": {\"count\": 1470.0, \"mean\": 2.912925170068027, \"std\": 1.0241649445978729, \"min\": 1.0, \"25%\": 2.0, \"50%\": 3.0, \"75%\": 4.0, \"max\": 5.0}, \"EmployeeCount\": {\"count\": 1470.0, \"mean\": 1.0, \"std\": 0.0, \"min\": 1.0, \"25%\": 1.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 1.0}, \"EmployeeNumber\": {\"count\": 1470.0, \"mean\": 1024.865306122449, \"std\": 602.0243348474751, \"min\": 1.0, \"25%\": 491.25, \"50%\": 1020.5, \"75%\": 1555.75, \"max\": 2068.0}, \"EnvironmentSatisfaction\": {\"count\": 1470.0, \"mean\": 2.721768707482993, \"std\": 1.0930822146350005, \"min\": 1.0, \"25%\": 2.0, \"50%\": 3.0, \"75%\": 4.0, \"max\": 4.0}, \"HourlyRate\": {\"count\": 1470.0, \"mean\": 65.89115646258503, \"std\": 20.329427593996165, \"min\": 30.0, \"25%\": 48.0, \"50%\": 66.0, \"75%\": 83.75, \"max\": 100.0}, \"JobInvolvement\": {\"count\": 1470.0, \"mean\": 2.7299319727891156, \"std\": 0.7115611429632304, \"min\": 1.0, \"25%\": 2.0, \"50%\": 3.0, \"75%\": 3.0, \"max\": 4.0}, \"JobLevel\": {\"count\": 1470.0, \"mean\": 2.0639455782312925, \"std\": 1.106939898935122, \"min\": 1.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 3.0, \"max\": 5.0}, \"JobSatisfaction\": {\"count\": 1470.0, \"mean\": 2.7285714285714286, \"std\": 1.1028461230547204, \"min\": 1.0, \"25%\": 2.0, \"50%\": 3.0, \"75%\": 4.0, \"max\": 4.0}, \"MonthlyIncome\": {\"count\": 1470.0, \"mean\": 6502.931292517007, \"std\": 4707.956783097994, \"min\": 1009.0, \"25%\": 2911.0, \"50%\": 4919.0, \"75%\": 8379.0, \"max\": 19999.0}, \"MonthlyRate\": {\"count\": 1470.0, \"mean\": 14313.103401360544, \"std\": 7117.786044059976, \"min\": 2094.0, \"25%\": 8047.0, \"50%\": 14235.5, \"75%\": 20461.5, \"max\": 26999.0}, \"NumCompaniesWorked\": {\"count\": 1470.0, \"mean\": 2.6931972789115646, \"std\": 2.498009006070747, \"min\": 0.0, \"25%\": 1.0, \"50%\": 2.0, \"75%\": 4.0, \"max\": 9.0}, \"PercentSalaryHike\": {\"count\": 1470.0, \"mean\": 15.209523809523809, \"std\": 3.6599377165396407, \"min\": 11.0, \"25%\": 12.0, \"50%\": 14.0, \"75%\": 18.0, \"max\": 25.0}, \"PerformanceRating\": {\"count\": 1470.0, \"mean\": 3.1537414965986397, \"std\": 0.36082352460434397, \"min\": 3.0, \"25%\": 3.0, \"50%\": 3.0, \"75%\": 3.0, \"max\": 4.0}, \"RelationshipSatisfaction\": {\"count\": 1470.0, \"mean\": 2.7122448979591836, \"std\": 1.0812088864403524, \"min\": 1.0, \"25%\": 2.0, \"50%\": 3.0, \"75%\": 4.0, \"max\": 4.0}, \"StandardHours\": {\"count\": 1470.0, \"mean\": 80.0, \"std\": 0.0, \"min\": 80.0, \"25%\": 80.0, \"50%\": 80.0, \"75%\": 80.0, \"max\": 80.0}, \"StockOptionLevel\": {\"count\": 1470.0, \"mean\": 0.7938775510204081, \"std\": 0.852076667930838, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 3.0}, \"TotalWorkingYears\": {\"count\": 1470.0, \"mean\": 11.279591836734694, \"std\": 7.780781675514997, \"min\": 0.0, \"25%\": 6.0, \"50%\": 10.0, \"75%\": 15.0, \"max\": 40.0}, \"TrainingTimesLastYear\": {\"count\": 1470.0, \"mean\": 2.7993197278911564, \"std\": 1.2892706207958455, \"min\": 0.0, \"25%\": 
2.0, \"50%\": 3.0, \"75%\": 3.0, \"max\": 6.0}, \"WorkLifeBalance\": {\"count\": 1470.0, \"mean\": 2.7612244897959184, \"std\": 0.7064758297141507, \"min\": 1.0, \"25%\": 2.0, \"50%\": 3.0, \"75%\": 3.0, \"max\": 4.0}, \"YearsAtCompany\": {\"count\": 1470.0, \"mean\": 7.0081632653061225, \"std\": 6.126525152403569, \"min\": 0.0, \"25%\": 3.0, \"50%\": 5.0, \"75%\": 9.0, \"max\": 40.0}, \"YearsInCurrentRole\": {\"count\": 1470.0, \"mean\": 4.229251700680272, \"std\": 3.623137034670628, \"min\": 0.0, \"25%\": 2.0, \"50%\": 3.0, \"75%\": 7.0, \"max\": 18.0}, \"YearsSinceLastPromotion\": {\"count\": 1470.0, \"mean\": 2.1877551020408164, \"std\": 3.222430279137967, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 3.0, \"max\": 15.0}, \"YearsWithCurrManager\": {\"count\": 1470.0, \"mean\": 4.12312925170068, \"std\": 3.5681361205404376, \"min\": 0.0, \"25%\": 2.0, \"50%\": 3.0, \"75%\": 7.0, \"max\": 17.0}}", "examples": "{\"Age\":{\"0\":41,\"1\":49,\"2\":37,\"3\":33},\"Attrition\":{\"0\":\"Yes\",\"1\":\"No\",\"2\":\"Yes\",\"3\":\"No\"},\"BusinessTravel\":{\"0\":\"Travel_Rarely\",\"1\":\"Travel_Frequently\",\"2\":\"Travel_Rarely\",\"3\":\"Travel_Frequently\"},\"DailyRate\":{\"0\":1102,\"1\":279,\"2\":1373,\"3\":1392},\"Department\":{\"0\":\"Sales\",\"1\":\"Research & Development\",\"2\":\"Research & Development\",\"3\":\"Research & Development\"},\"DistanceFromHome\":{\"0\":1,\"1\":8,\"2\":2,\"3\":3},\"Education\":{\"0\":2,\"1\":1,\"2\":2,\"3\":4},\"EducationField\":{\"0\":\"Life Sciences\",\"1\":\"Life Sciences\",\"2\":\"Other\",\"3\":\"Life Sciences\"},\"EmployeeCount\":{\"0\":1,\"1\":1,\"2\":1,\"3\":1},\"EmployeeNumber\":{\"0\":1,\"1\":2,\"2\":4,\"3\":5},\"EnvironmentSatisfaction\":{\"0\":2,\"1\":3,\"2\":4,\"3\":4},\"Gender\":{\"0\":\"Female\",\"1\":\"Male\",\"2\":\"Male\",\"3\":\"Female\"},\"HourlyRate\":{\"0\":94,\"1\":61,\"2\":92,\"3\":56},\"JobInvolvement\":{\"0\":3,\"1\":2,\"2\":2,\"3\":3},\"JobLevel\":{\"0\":2,\"1\":2,\"2\":1,\"3\":1},\"JobRole\":{\"0\":\"Sales Executive\",\"1\":\"Research Scientist\",\"2\":\"Laboratory Technician\",\"3\":\"Research Scientist\"},\"JobSatisfaction\":{\"0\":4,\"1\":2,\"2\":3,\"3\":3},\"MaritalStatus\":{\"0\":\"Single\",\"1\":\"Married\",\"2\":\"Single\",\"3\":\"Married\"},\"MonthlyIncome\":{\"0\":5993,\"1\":5130,\"2\":2090,\"3\":2909},\"MonthlyRate\":{\"0\":19479,\"1\":24907,\"2\":2396,\"3\":23159},\"NumCompaniesWorked\":{\"0\":8,\"1\":1,\"2\":6,\"3\":1},\"Over18\":{\"0\":\"Y\",\"1\":\"Y\",\"2\":\"Y\",\"3\":\"Y\"},\"OverTime\":{\"0\":\"Yes\",\"1\":\"No\",\"2\":\"Yes\",\"3\":\"Yes\"},\"PercentSalaryHike\":{\"0\":11,\"1\":23,\"2\":15,\"3\":11},\"PerformanceRating\":{\"0\":3,\"1\":4,\"2\":3,\"3\":3},\"RelationshipSatisfaction\":{\"0\":1,\"1\":4,\"2\":2,\"3\":3},\"StandardHours\":{\"0\":80,\"1\":80,\"2\":80,\"3\":80},\"StockOptionLevel\":{\"0\":0,\"1\":1,\"2\":0,\"3\":0},\"TotalWorkingYears\":{\"0\":8,\"1\":10,\"2\":7,\"3\":8},\"TrainingTimesLastYear\":{\"0\":0,\"1\":3,\"2\":3,\"3\":3},\"WorkLifeBalance\":{\"0\":1,\"1\":3,\"2\":3,\"3\":3},\"YearsAtCompany\":{\"0\":6,\"1\":10,\"2\":0,\"3\":8},\"YearsInCurrentRole\":{\"0\":4,\"1\":7,\"2\":0,\"3\":7},\"YearsSinceLastPromotion\":{\"0\":0,\"1\":1,\"2\":0,\"3\":3},\"YearsWithCurrManager\":{\"0\":5,\"1\":7,\"2\":0,\"3\":0}}"}}]
| true | 1 |
<start_data_description><data_path>ibm-hr-analytics-attrition-dataset/WA_Fn-UseC_-HR-Employee-Attrition.csv:
<column_names>
['Age', 'Attrition', 'BusinessTravel', 'DailyRate', 'Department', 'DistanceFromHome', 'Education', 'EducationField', 'EmployeeCount', 'EmployeeNumber', 'EnvironmentSatisfaction', 'Gender', 'HourlyRate', 'JobInvolvement', 'JobLevel', 'JobRole', 'JobSatisfaction', 'MaritalStatus', 'MonthlyIncome', 'MonthlyRate', 'NumCompaniesWorked', 'Over18', 'OverTime', 'PercentSalaryHike', 'PerformanceRating', 'RelationshipSatisfaction', 'StandardHours', 'StockOptionLevel', 'TotalWorkingYears', 'TrainingTimesLastYear', 'WorkLifeBalance', 'YearsAtCompany', 'YearsInCurrentRole', 'YearsSinceLastPromotion', 'YearsWithCurrManager']
<column_types>
{'Age': 'int64', 'Attrition': 'object', 'BusinessTravel': 'object', 'DailyRate': 'int64', 'Department': 'object', 'DistanceFromHome': 'int64', 'Education': 'int64', 'EducationField': 'object', 'EmployeeCount': 'int64', 'EmployeeNumber': 'int64', 'EnvironmentSatisfaction': 'int64', 'Gender': 'object', 'HourlyRate': 'int64', 'JobInvolvement': 'int64', 'JobLevel': 'int64', 'JobRole': 'object', 'JobSatisfaction': 'int64', 'MaritalStatus': 'object', 'MonthlyIncome': 'int64', 'MonthlyRate': 'int64', 'NumCompaniesWorked': 'int64', 'Over18': 'object', 'OverTime': 'object', 'PercentSalaryHike': 'int64', 'PerformanceRating': 'int64', 'RelationshipSatisfaction': 'int64', 'StandardHours': 'int64', 'StockOptionLevel': 'int64', 'TotalWorkingYears': 'int64', 'TrainingTimesLastYear': 'int64', 'WorkLifeBalance': 'int64', 'YearsAtCompany': 'int64', 'YearsInCurrentRole': 'int64', 'YearsSinceLastPromotion': 'int64', 'YearsWithCurrManager': 'int64'}
<dataframe_Summary>
{'Age': {'count': 1470.0, 'mean': 36.923809523809524, 'std': 9.135373489136732, 'min': 18.0, '25%': 30.0, '50%': 36.0, '75%': 43.0, 'max': 60.0}, 'DailyRate': {'count': 1470.0, 'mean': 802.4857142857143, 'std': 403.50909994352816, 'min': 102.0, '25%': 465.0, '50%': 802.0, '75%': 1157.0, 'max': 1499.0}, 'DistanceFromHome': {'count': 1470.0, 'mean': 9.19251700680272, 'std': 8.106864435666074, 'min': 1.0, '25%': 2.0, '50%': 7.0, '75%': 14.0, 'max': 29.0}, 'Education': {'count': 1470.0, 'mean': 2.912925170068027, 'std': 1.0241649445978729, 'min': 1.0, '25%': 2.0, '50%': 3.0, '75%': 4.0, 'max': 5.0}, 'EmployeeCount': {'count': 1470.0, 'mean': 1.0, 'std': 0.0, 'min': 1.0, '25%': 1.0, '50%': 1.0, '75%': 1.0, 'max': 1.0}, 'EmployeeNumber': {'count': 1470.0, 'mean': 1024.865306122449, 'std': 602.0243348474751, 'min': 1.0, '25%': 491.25, '50%': 1020.5, '75%': 1555.75, 'max': 2068.0}, 'EnvironmentSatisfaction': {'count': 1470.0, 'mean': 2.721768707482993, 'std': 1.0930822146350005, 'min': 1.0, '25%': 2.0, '50%': 3.0, '75%': 4.0, 'max': 4.0}, 'HourlyRate': {'count': 1470.0, 'mean': 65.89115646258503, 'std': 20.329427593996165, 'min': 30.0, '25%': 48.0, '50%': 66.0, '75%': 83.75, 'max': 100.0}, 'JobInvolvement': {'count': 1470.0, 'mean': 2.7299319727891156, 'std': 0.7115611429632304, 'min': 1.0, '25%': 2.0, '50%': 3.0, '75%': 3.0, 'max': 4.0}, 'JobLevel': {'count': 1470.0, 'mean': 2.0639455782312925, 'std': 1.106939898935122, 'min': 1.0, '25%': 1.0, '50%': 2.0, '75%': 3.0, 'max': 5.0}, 'JobSatisfaction': {'count': 1470.0, 'mean': 2.7285714285714286, 'std': 1.1028461230547204, 'min': 1.0, '25%': 2.0, '50%': 3.0, '75%': 4.0, 'max': 4.0}, 'MonthlyIncome': {'count': 1470.0, 'mean': 6502.931292517007, 'std': 4707.956783097994, 'min': 1009.0, '25%': 2911.0, '50%': 4919.0, '75%': 8379.0, 'max': 19999.0}, 'MonthlyRate': {'count': 1470.0, 'mean': 14313.103401360544, 'std': 7117.786044059976, 'min': 2094.0, '25%': 8047.0, '50%': 14235.5, '75%': 20461.5, 'max': 26999.0}, 'NumCompaniesWorked': {'count': 1470.0, 'mean': 2.6931972789115646, 'std': 2.498009006070747, 'min': 0.0, '25%': 1.0, '50%': 2.0, '75%': 4.0, 'max': 9.0}, 'PercentSalaryHike': {'count': 1470.0, 'mean': 15.209523809523809, 'std': 3.6599377165396407, 'min': 11.0, '25%': 12.0, '50%': 14.0, '75%': 18.0, 'max': 25.0}, 'PerformanceRating': {'count': 1470.0, 'mean': 3.1537414965986397, 'std': 0.36082352460434397, 'min': 3.0, '25%': 3.0, '50%': 3.0, '75%': 3.0, 'max': 4.0}, 'RelationshipSatisfaction': {'count': 1470.0, 'mean': 2.7122448979591836, 'std': 1.0812088864403524, 'min': 1.0, '25%': 2.0, '50%': 3.0, '75%': 4.0, 'max': 4.0}, 'StandardHours': {'count': 1470.0, 'mean': 80.0, 'std': 0.0, 'min': 80.0, '25%': 80.0, '50%': 80.0, '75%': 80.0, 'max': 80.0}, 'StockOptionLevel': {'count': 1470.0, 'mean': 0.7938775510204081, 'std': 0.852076667930838, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 3.0}, 'TotalWorkingYears': {'count': 1470.0, 'mean': 11.279591836734694, 'std': 7.780781675514997, 'min': 0.0, '25%': 6.0, '50%': 10.0, '75%': 15.0, 'max': 40.0}, 'TrainingTimesLastYear': {'count': 1470.0, 'mean': 2.7993197278911564, 'std': 1.2892706207958455, 'min': 0.0, '25%': 2.0, '50%': 3.0, '75%': 3.0, 'max': 6.0}, 'WorkLifeBalance': {'count': 1470.0, 'mean': 2.7612244897959184, 'std': 0.7064758297141507, 'min': 1.0, '25%': 2.0, '50%': 3.0, '75%': 3.0, 'max': 4.0}, 'YearsAtCompany': {'count': 1470.0, 'mean': 7.0081632653061225, 'std': 6.126525152403569, 'min': 0.0, '25%': 3.0, '50%': 5.0, '75%': 9.0, 'max': 40.0}, 'YearsInCurrentRole': {'count': 
1470.0, 'mean': 4.229251700680272, 'std': 3.623137034670628, 'min': 0.0, '25%': 2.0, '50%': 3.0, '75%': 7.0, 'max': 18.0}, 'YearsSinceLastPromotion': {'count': 1470.0, 'mean': 2.1877551020408164, 'std': 3.222430279137967, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 3.0, 'max': 15.0}, 'YearsWithCurrManager': {'count': 1470.0, 'mean': 4.12312925170068, 'std': 3.5681361205404376, 'min': 0.0, '25%': 2.0, '50%': 3.0, '75%': 7.0, 'max': 17.0}}
<dataframe_info>
RangeIndex: 1470 entries, 0 to 1469
Data columns (total 35 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Age 1470 non-null int64
1 Attrition 1470 non-null object
2 BusinessTravel 1470 non-null object
3 DailyRate 1470 non-null int64
4 Department 1470 non-null object
5 DistanceFromHome 1470 non-null int64
6 Education 1470 non-null int64
7 EducationField 1470 non-null object
8 EmployeeCount 1470 non-null int64
9 EmployeeNumber 1470 non-null int64
10 EnvironmentSatisfaction 1470 non-null int64
11 Gender 1470 non-null object
12 HourlyRate 1470 non-null int64
13 JobInvolvement 1470 non-null int64
14 JobLevel 1470 non-null int64
15 JobRole 1470 non-null object
16 JobSatisfaction 1470 non-null int64
17 MaritalStatus 1470 non-null object
18 MonthlyIncome 1470 non-null int64
19 MonthlyRate 1470 non-null int64
20 NumCompaniesWorked 1470 non-null int64
21 Over18 1470 non-null object
22 OverTime 1470 non-null object
23 PercentSalaryHike 1470 non-null int64
24 PerformanceRating 1470 non-null int64
25 RelationshipSatisfaction 1470 non-null int64
26 StandardHours 1470 non-null int64
27 StockOptionLevel 1470 non-null int64
28 TotalWorkingYears 1470 non-null int64
29 TrainingTimesLastYear 1470 non-null int64
30 WorkLifeBalance 1470 non-null int64
31 YearsAtCompany 1470 non-null int64
32 YearsInCurrentRole 1470 non-null int64
33 YearsSinceLastPromotion 1470 non-null int64
34 YearsWithCurrManager 1470 non-null int64
dtypes: int64(26), object(9)
memory usage: 402.1+ KB
<some_examples>
{'Age': {'0': 41, '1': 49, '2': 37, '3': 33}, 'Attrition': {'0': 'Yes', '1': 'No', '2': 'Yes', '3': 'No'}, 'BusinessTravel': {'0': 'Travel_Rarely', '1': 'Travel_Frequently', '2': 'Travel_Rarely', '3': 'Travel_Frequently'}, 'DailyRate': {'0': 1102, '1': 279, '2': 1373, '3': 1392}, 'Department': {'0': 'Sales', '1': 'Research & Development', '2': 'Research & Development', '3': 'Research & Development'}, 'DistanceFromHome': {'0': 1, '1': 8, '2': 2, '3': 3}, 'Education': {'0': 2, '1': 1, '2': 2, '3': 4}, 'EducationField': {'0': 'Life Sciences', '1': 'Life Sciences', '2': 'Other', '3': 'Life Sciences'}, 'EmployeeCount': {'0': 1, '1': 1, '2': 1, '3': 1}, 'EmployeeNumber': {'0': 1, '1': 2, '2': 4, '3': 5}, 'EnvironmentSatisfaction': {'0': 2, '1': 3, '2': 4, '3': 4}, 'Gender': {'0': 'Female', '1': 'Male', '2': 'Male', '3': 'Female'}, 'HourlyRate': {'0': 94, '1': 61, '2': 92, '3': 56}, 'JobInvolvement': {'0': 3, '1': 2, '2': 2, '3': 3}, 'JobLevel': {'0': 2, '1': 2, '2': 1, '3': 1}, 'JobRole': {'0': 'Sales Executive', '1': 'Research Scientist', '2': 'Laboratory Technician', '3': 'Research Scientist'}, 'JobSatisfaction': {'0': 4, '1': 2, '2': 3, '3': 3}, 'MaritalStatus': {'0': 'Single', '1': 'Married', '2': 'Single', '3': 'Married'}, 'MonthlyIncome': {'0': 5993, '1': 5130, '2': 2090, '3': 2909}, 'MonthlyRate': {'0': 19479, '1': 24907, '2': 2396, '3': 23159}, 'NumCompaniesWorked': {'0': 8, '1': 1, '2': 6, '3': 1}, 'Over18': {'0': 'Y', '1': 'Y', '2': 'Y', '3': 'Y'}, 'OverTime': {'0': 'Yes', '1': 'No', '2': 'Yes', '3': 'Yes'}, 'PercentSalaryHike': {'0': 11, '1': 23, '2': 15, '3': 11}, 'PerformanceRating': {'0': 3, '1': 4, '2': 3, '3': 3}, 'RelationshipSatisfaction': {'0': 1, '1': 4, '2': 2, '3': 3}, 'StandardHours': {'0': 80, '1': 80, '2': 80, '3': 80}, 'StockOptionLevel': {'0': 0, '1': 1, '2': 0, '3': 0}, 'TotalWorkingYears': {'0': 8, '1': 10, '2': 7, '3': 8}, 'TrainingTimesLastYear': {'0': 0, '1': 3, '2': 3, '3': 3}, 'WorkLifeBalance': {'0': 1, '1': 3, '2': 3, '3': 3}, 'YearsAtCompany': {'0': 6, '1': 10, '2': 0, '3': 8}, 'YearsInCurrentRole': {'0': 4, '1': 7, '2': 0, '3': 7}, 'YearsSinceLastPromotion': {'0': 0, '1': 1, '2': 0, '3': 3}, 'YearsWithCurrManager': {'0': 5, '1': 7, '2': 0, '3': 0}}
<end_description>
| 995 | 0 | 2,856 | 995 |
129987774
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
d = 5  # slit separation
max_y = 20  # wall (screen) distance
max_x = 20  # half-width of the x range (sets how many peaks are included)
point_step = 0.1  # grid spacing between sampled points
wavelength = 5
x_positive = d / 2
x_negative = 0 - x_positive
x_list = []
y_list = []
z_list = []
z_list_positive = []
z_list_negative = []
len_positive = []
for y in np.arange(0, max_y, point_step):
for x in np.arange(-max_x, max_x, point_step):
length_positive = np.sqrt((y**2) + ((x - x_positive) ** 2))
length_negative = np.sqrt((y**2) + ((x - x_negative) ** 2))
z_positive = np.sin(((2 * np.pi) / wavelength) * length_positive)
z_negative = np.sin(((2 * np.pi) / wavelength) * length_negative)
z = z_positive + z_negative
x_list.append(x)
y_list.append(y)
z_list.append(z)
z_list_positive.append(z_positive)
z_list_negative.append(z_negative)
X, Y, Z = np.array(x_list), np.array(y_list), np.array(z_list)
Z_neg, Z_pos = np.array(z_list_negative), np.array(z_list_positive)
fig = plt.figure(figsize=(12, 10))
ax = plt.axes(projection="3d")
surf = ax.plot_trisurf(X, Y, Z, cmap=plt.cm.cividis)
plt.show()
fig = plt.figure(figsize=(12, 10))
ax = plt.axes(projection="3d")
surf = ax.plot_trisurf(X, Y, Z_pos, cmap=plt.cm.cividis)
plt.show()
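# The loops above evaluate, for every screen point (x, y), the distances r_plus and r_minus to the two
# slits and add the two waves sin(2*pi*r/wavelength), so Z is the superposition amplitude. As a hedged
# aside (an assumed alternative, not part of the original notebook), the same surface can be computed
# without Python loops using NumPy broadcasting on a meshgrid, reusing the parameters defined above.
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(-max_x, max_x, point_step)
ys = np.arange(0, max_y, point_step)
Xg, Yg = np.meshgrid(xs, ys)  # 2D grids of screen coordinates
r_pos = np.sqrt(Yg**2 + (Xg - x_positive) ** 2)  # distance from each point to the +d/2 slit
r_neg = np.sqrt(Yg**2 + (Xg - x_negative) ** 2)  # distance from each point to the -d/2 slit
Zg = np.sin(2 * np.pi * r_pos / wavelength) + np.sin(2 * np.pi * r_neg / wavelength)
fig = plt.figure(figsize=(12, 10))
ax = plt.axes(projection="3d")
ax.plot_surface(Xg, Yg, Zg, cmap=plt.cm.cividis)  # plot_surface accepts the 2D meshgrid arrays directly
plt.show()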
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/987/129987774.ipynb
| null | null |
[{"Id": 129987774, "ScriptId": 38656614, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14500225, "CreationDate": "05/18/2023 00:41:45", "VersionNumber": 2.0, "Title": "Phys-Project", "EvaluationDate": "05/18/2023", "IsChange": true, "TotalLines": 64.0, "LinesInsertedFromPrevious": 2.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 62.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
d = 5  # slit separation
max_y = 20  # wall (screen) distance
max_x = 20  # half-width of the x range (sets how many peaks are included)
point_step = 0.1  # grid spacing between sampled points
wavelength = 5
x_positive = d / 2
x_negative = 0 - x_positive
x_list = []
y_list = []
z_list = []
z_list_positive = []
z_list_negative = []
len_positive = []
for y in np.arange(0, max_y, point_step):
for x in np.arange(-max_x, max_x, point_step):
length_positive = np.sqrt((y**2) + ((x - x_positive) ** 2))
length_negative = np.sqrt((y**2) + ((x - x_negative) ** 2))
z_positive = np.sin(((2 * np.pi) / wavelength) * length_positive)
z_negative = np.sin(((2 * np.pi) / wavelength) * length_negative)
z = z_positive + z_negative
x_list.append(x)
y_list.append(y)
z_list.append(z)
z_list_positive.append(z_positive)
z_list_negative.append(z_negative)
X, Y, Z = np.array(x_list), np.array(y_list), np.array(z_list)
Z_neg, Z_pos = np.array(z_list_negative), np.array(z_list_positive)
fig = plt.figure(figsize=(12, 10))
ax = plt.axes(projection="3d")
surf = ax.plot_trisurf(X, Y, Z, cmap=plt.cm.cividis)
plt.show()
fig = plt.figure(figsize=(12, 10))
ax = plt.axes(projection="3d")
surf = ax.plot_trisurf(X, Y, Z_pos, cmap=plt.cm.cividis)
plt.show()
| false | 0 | 701 | 0 | 701 | 701 |
||
129987958
|
import numpy as np
import pandas as pd
trainAbs = pd.read_csv("/kaggle/input/data-set/test-abstract.csv", encoding="latin-1")
trainTargets = pd.read_csv(
"/kaggle/input/data-set/test-targets.csv", encoding="latin-1"
)
test_df = pd.merge(trainAbs, trainTargets, on="ReviewID", how="inner")
test_df.to_csv("/kaggle/working/Final_test.csv", index=False)
train_df = pd.read_csv(
"/kaggle/input/data-set/Final_train.csv",
encoding="latin-1",
usecols=["Abstract", "Target"],
)
train_df
train_df = train_df.rename(columns={"Abstract": "source_text", "Target": "target_text"})
train_df
train_df["source_text"] = "summarize: " + train_df["source_text"]
train_df
test_df = pd.read_csv(
"/kaggle/working/Final_test.csv", encoding="latin-1", usecols=["Abstract", "Target"]
)
test_df
test_df = test_df.rename(columns={"Abstract": "source_text", "Target": "target_text"})
test_df
test_df["source_text"] = "summarize: " + test_df["source_text"]
test_df
from simplet5 import SimpleT5
model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="t5-base")
train_df = train_df.applymap(str)
test_df = test_df.applymap(str)
model.train(
train_df=train_df,
eval_df=test_df,
source_max_token_len=512,
target_max_token_len=128,
outputdir="/kaggle/working/Outputs",
batch_size=10,
max_epochs=6,
use_gpu=True,
dataloader_num_workers=4,
)
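# A brief descriptive note on the call above: source_max_token_len and target_max_token_len cap the
# tokenized length of the abstracts and of the target summaries, outputdir is where SimpleT5 writes its
# per-epoch checkpoints (one of which is loaded further below), and batch_size, max_epochs, use_gpu and
# dataloader_num_workers control the training loop itself.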
import matplotlib.pyplot as plt
# Retrieve the accuracy values from the dataframes
train_accuracy = model.training_stats["train_acc"]
eval_accuracy = model.training_stats["eval_acc"]
# Create the x-axis values (epochs)
epochs = range(1, len(train_accuracy) + 1)
# Plot the accuracy values
plt.plot(epochs, train_accuracy, label="Train Accuracy")
plt.plot(epochs, eval_accuracy, label="Eval Accuracy")
# Add labels and title
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.title("Accuracy During Training")
# Add legend
plt.legend()
# Show the plot
plt.show()
import os
os.chdir(r"/kaggle/working")
from IPython.display import FileLinks
FileLinks(r"Outputs")
model.load_model(
"t5", "Outputs/simplet5-epoch-5-train-loss-0.547-val-loss-3.5378", use_gpu=False
)
text = """mutation that prevents certain amino acids from entering neurons leads to the cells’ death early in brain development, according to a new study in mice. The findings provide clues to what happens in the brains of people with the mutation, which is linked to autism.
The mutation affects the SLC7A5 gene, which encodes a protein that transports some large amino acids across the blood-brain barrier. Most of these amino acids are essential, meaning the body cannot produce them and has to get them from food. Mice missing the SLC7A5 gene in cells of the blood-brain barrier develop microcephaly, or an unusually small brain, after birth and have motor and social difficulties, a 2016 study showed.
In the new study, published last month in Cell, the same team of researchers discovered that neurons in the mouse brain also express SLC7A5. Knocking the gene out of some of those neurons starves the cells of amino acids and causes them to die.
“Obviously neurons need some fuel,” says lead researcher Gaia Novarino, professor of neuroscience at the Institute of Science and Technology in Klosterneuburg, Austria. It’s interesting “to really see that our neurons are dependent on that level, specifically at certain stages, on those amino acids.”
It was known that, in the developing brain, neural progenitor cells get energy through anaerobic glycolysis—that is, by breaking down glucose in the absence of oxygen. Later, support cells called astrocytes feed mature neurons the vast amounts of energy needed to fire and reset. Surprisingly, there was little information about what happens in between, when neurons begin to fire but do not yet have support from astrocytes.
It turns out that during this transitional period, young neurons get their energy by metabolizing a set of essential amino acids called branched-chain amino acids (BCAAs), Novarino’s team discovered by analyzing the metabolomes of developing mouse neurons.
“That was eye-opening to me, how dramatically the metabolism of the cell seems to change,” says John Jay Gargus, director of the Center for Autism Research and Translation at the University of California, Irvine, who was not involved in the study.
BCAAs are among the amino acids that SLC7A5 transports. In mice, young neurons missing the gene are therefore starved of their primary energy source. These neurons switch to running on lipids, but they fire less frequently than usual and then disappear within 10 days after birth, the team found. As a result, SLC7A5 mice have smaller brains than controls do.
Similar to these mice, two children with SLC7A5 mutations, whom Novarino and her team identified and monitored after their 2016 study, were born with mild microcephaly that became more pronounced within seven months.
It is not clear why the brain seems to be partially protected from SLC7A5 mutations before birth, but perhaps amino acid levels during that period are higher overall or controlled by different transporters, Novarino speculates.
“I think this paper is going to have a huge impact on the field,” says David Amaral, distinguished professor of psychiatry and behavioral sciences at the University of California, Davis MIND Institute, who was not involved in the new work. “It shows that changes in metabolic pathways can have wide-range effects downstream.”
He says the research aligns with a larger pattern in the field: About 17 percent of autistic children show an imbalance in their amino acid levels, according to a 2018 study that Amaral led.
“With this particular mutation, there may be difficulties in trying to find a targeted approach,” Amaral says. But by paying more attention to the metabolomes—that is, the molecules that are used and produced in metabolism—of people with autism, researchers might be able to identify subtypes that could be treatable. “I think the bigger picture is that as we unravel some of these metabolic disturbances … there well may be some potential for effective treatment.”"""
i = 0
list_text = []
while i < len(text):
list_text.append(model.predict(text[i : i + 512]))
i += 512
pred_text = "-\n".join(str(item) for s in list_text for item in s)
print(pred_text)
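# A hedged note on the chunking above: the loop slices the article every 512 *characters*, which is only a
# rough proxy for the 512-token source_max_token_len used in training and can split words mid-way. The
# sketch below is an assumed alternative (not part of the original notebook) that packs whole words into
# chunks under the same character budget; model.predict is the same SimpleT5 call already used above.
def chunk_by_words(full_text, max_chars=512):
    # Greedily pack whole words into chunks of at most max_chars characters.
    chunks, current = [], ""
    for word in full_text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) > max_chars and current:
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

word_chunks = chunk_by_words(text)
word_preds = [model.predict(chunk) for chunk in word_chunks]
pred_text_words = "-\n".join(str(item) for pred in word_preds for item in pred)
print(pred_text_words)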
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/987/129987958.ipynb
| null | null |
[{"Id": 129987958, "ScriptId": 38645170, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10263393, "CreationDate": "05/18/2023 00:44:58", "VersionNumber": 1.0, "Title": "Simple T5 training", "EvaluationDate": NaN, "IsChange": true, "TotalLines": 125.0, "LinesInsertedFromPrevious": 125.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np
import pandas as pd
trainAbs = pd.read_csv("/kaggle/input/data-set/test-abstract.csv", encoding="latin-1")
trainTargets = pd.read_csv(
"/kaggle/input/data-set/test-targets.csv", encoding="latin-1"
)
test_df = pd.merge(trainAbs, trainTargets, on="ReviewID", how="inner")
test_df.to_csv("/kaggle/working/Final_test.csv", index=False)
train_df = pd.read_csv(
"/kaggle/input/data-set/Final_train.csv",
encoding="latin-1",
usecols=["Abstract", "Target"],
)
train_df
train_df = train_df.rename(columns={"Abstract": "source_text", "Target": "target_text"})
train_df
train_df["source_text"] = "summarize: " + train_df["source_text"]
train_df
test_df = pd.read_csv(
"/kaggle/working/Final_test.csv", encoding="latin-1", usecols=["Abstract", "Target"]
)
test_df
test_df = test_df.rename(columns={"Abstract": "source_text", "Target": "target_text"})
test_df
test_df["source_text"] = "summarize: " + test_df["source_text"]
test_df
from simplet5 import SimpleT5
model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="t5-base")
train_df = train_df.applymap(str)
test_df = test_df.applymap(str)
model.train(
train_df=train_df,
eval_df=test_df,
source_max_token_len=512,
target_max_token_len=128,
outputdir="/kaggle/working/Outputs",
batch_size=10,
max_epochs=6,
use_gpu=True,
dataloader_num_workers=4,
)
import matplotlib.pyplot as plt
# Retrieve the accuracy values from the dataframes
train_accuracy = model.training_stats["train_acc"]
eval_accuracy = model.training_stats["eval_acc"]
# Create the x-axis values (epochs)
epochs = range(1, len(train_accuracy) + 1)
# Plot the accuracy values
plt.plot(epochs, train_accuracy, label="Train Accuracy")
plt.plot(epochs, eval_accuracy, label="Eval Accuracy")
# Add labels and title
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.title("Accuracy During Training")
# Add legend
plt.legend()
# Show the plot
plt.show()
import os
os.chdir(r"/kaggle/working")
from IPython.display import FileLinks
FileLinks(r"Outputs")
model.load_model(
"t5", "Outputs/simplet5-epoch-5-train-loss-0.547-val-loss-3.5378", use_gpu=False
)
text = """mutation that prevents certain amino acids from entering neurons leads to the cells’ death early in brain development, according to a new study in mice. The findings provide clues to what happens in the brains of people with the mutation, which is linked to autism.
The mutation affects the SLC7A5 gene, which encodes a protein that transports some large amino acids across the blood-brain barrier. Most of these amino acids are essential, meaning the body cannot produce them and has to get them from food. Mice missing the SLC7A5 gene in cells of the blood-brain barrier develop microcephaly, or an unusually small brain, after birth and have motor and social difficulties, a 2016 study showed.
In the new study, published last month in Cell, the same team of researchers discovered that neurons in the mouse brain also express SLC7A5. Knocking the gene out of some of those neurons starves the cells of amino acids and causes them to die.
“Obviously neurons need some fuel,” says lead researcher Gaia Novarino, professor of neuroscience at the Institute of Science and Technology in Klosterneuburg, Austria. It’s interesting “to really see that our neurons are dependent on that level, specifically at certain stages, on those amino acids.”
It was known that, in the developing brain, neural progenitor cells get energy through anaerobic glycolysis—that is, by breaking down glucose in the absence of oxygen. Later, support cells called astrocytes feed mature neurons the vast amounts of energy needed to fire and reset. Surprisingly, there was little information about what happens in between, when neurons begin to fire but do not yet have support from astrocytes.
It turns out that during this transitional period, young neurons get their energy by metabolizing a set of essential amino acids called branched-chain amino acids (BCAAs), Novarino’s team discovered by analyzing the metabolomes of developing mouse neurons.
“That was eye-opening to me, how dramatically the metabolism of the cell seems to change,” says John Jay Gargus, director of the Center for Autism Research and Translation at the University of California, Irvine, who was not involved in the study.
BCAAs are among the amino acids that SLC7A5 transports. In mice, young neurons missing the gene are therefore starved of their primary energy source. These neurons switch to running on lipids, but they fire less frequently than usual and then disappear within 10 days after birth, the team found. As a result, SLC7A5 mice have smaller brains than controls do.
Similar to these mice, two children with SLC7A5 mutations, whom Novarino and her team identified and monitored after their 2016 study, were born with mild microcephaly that became more pronounced within seven months.
It is not clear why the brain seems to be partially protected from SLC7A5 mutations before birth, but perhaps amino acid levels during that period are higher overall or controlled by different transporters, Novarino speculates.
“I think this paper is going to have a huge impact on the field,” says David Amaral, distinguished professor of psychiatry and behavioral sciences at the University of California, Davis MIND Institute, who was not involved in the new work. “It shows that changes in metabolic pathways can have wide-range effects downstream.”
He says the research aligns with a larger pattern in the field: About 17 percent of autistic children show an imbalance in their amino acid levels, according to a 2018 study that Amaral led.
“With this particular mutation, there may be difficulties in trying to find a targeted approach,” Amaral says. But by paying more attention to the metabolomes—that is, the molecules that are used and produced in metabolism—of people with autism, researchers might be able to identify subtypes that could be treatable. “I think the bigger picture is that as we unravel some of these metabolic disturbances … there well may be some potential for effective treatment.”"""
i = 0
list_text = []
while i < len(text):
list_text.append(model.predict(text[i : i + 512]))
i += 512
pred_text = "-\n".join(str(item) for s in list_text for item in s)
print(pred_text)
| false | 0 | 1,790 | 0 | 1,790 | 1,790 |
||
129987555
|
<jupyter_start><jupyter_text>EVs - One Electric Vehicle Dataset - Smaller
**CONTEXT**:
This is a dataset of electric vehicles.
One of the more popular data science datasets is the mtcars dataset. It is known for its simplicity when running analysis and visualizations.
When looking for simple datasets on EVs, there don't seem to be any. Also, given the growth in this market, this is something many would be curious about. Hence, the reason for creating this dataset.
For more information, please visit the data source below.
**TASKS**:
Some basic tasks would include
1. Which car has the fastest 0-100 acceleration?
2. Which has the highest efficiency?
3. Does a difference in power train affect the range, top speed, efficiency?
4. Which manufacturer has the most number of vehicles?
5. How does price relate to rapid charging?
**CONTENT**:
I've included two datasets below:
1. 'ElectricCarData_Clean.csv'
-- original pulled data.
2. 'ElectricCarData_Norm.csv'
-- units removed from each of the rows
-- rapid charge has a binary yes/no value
The point of both is to have users practice some data cleaning.
**CREDITS**:
There are two credits and sourcing that needs to be mentioned:
1. *Datasource*: ev-database.org/
2.*Banner image*: freepik - author - 'macrovector'
**UPDATES**:
There will be future updates when we can attain additional data.
Kaggle dataset identifier: evs-one-electric-vehicle-dataset
<jupyter_code>import pandas as pd
df = pd.read_csv('evs-one-electric-vehicle-dataset/ElectricCarData_Clean.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 103 entries, 0 to 102
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Brand 103 non-null object
1 Model 103 non-null object
2 AccelSec 103 non-null float64
3 TopSpeed_KmH 103 non-null int64
4 Range_Km 103 non-null int64
5 Efficiency_WhKm 103 non-null int64
6 FastCharge_KmH 103 non-null object
7 RapidCharge 103 non-null object
8 PowerTrain 103 non-null object
9 PlugType 103 non-null object
10 BodyStyle 103 non-null object
11 Segment 103 non-null object
12 Seats 103 non-null int64
13 PriceEuro 103 non-null int64
dtypes: float64(1), int64(5), object(8)
memory usage: 11.4+ KB
<jupyter_text>Examples:
{
"Brand": "Tesla ",
"Model": "Model 3 Long Range Dual Motor",
"AccelSec": 4.6,
"TopSpeed_KmH": 233,
"Range_Km": 450,
"Efficiency_WhKm": 161,
"FastCharge_KmH": 940,
"RapidCharge": "Yes",
"PowerTrain": "AWD",
"PlugType": "Type 2 CCS",
"BodyStyle": "Sedan",
"Segment": "D",
"Seats": 5,
"PriceEuro": 55480
}
{
"Brand": "Volkswagen ",
"Model": "ID.3 Pure",
"AccelSec": 10.0,
"TopSpeed_KmH": 160,
"Range_Km": 270,
"Efficiency_WhKm": 167,
"FastCharge_KmH": 250,
"RapidCharge": "Yes",
"PowerTrain": "RWD",
"PlugType": "Type 2 CCS",
"BodyStyle": "Hatchback",
"Segment": "C",
"Seats": 5,
"PriceEuro": 30000
}
{
"Brand": "Polestar ",
"Model": "2",
"AccelSec": 4.7,
"TopSpeed_KmH": 210,
"Range_Km": 400,
"Efficiency_WhKm": 181,
"FastCharge_KmH": 620,
"RapidCharge": "Yes",
"PowerTrain": "AWD",
"PlugType": "Type 2 CCS",
"BodyStyle": "Liftback",
"Segment": "D",
"Seats": 5,
"PriceEuro": 56440
}
{
"Brand": "BMW ",
"Model": "iX3 ",
"AccelSec": 6.8,
"TopSpeed_KmH": 180,
"Range_Km": 360,
"Efficiency_WhKm": 206,
"FastCharge_KmH": 560,
"RapidCharge": "Yes",
"PowerTrain": "RWD",
"PlugType": "Type 2 CCS",
"BodyStyle": "SUV",
"Segment": "D",
"Seats": 5,
"PriceEuro": 68040
}
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# An electric vehicle (EV) is a vehicle that uses one or more electric motors for propulsion. It can be powered by a collector system, with electricity from extravehicular sources, or it can be powered autonomously by a battery (sometimes charged by solar panels, or by converting fuel to electricity using fuel cells or a generator). EVs include, but are not limited to, road and rail vehicles, surface and underwater vessels, electric aircraft and electric spacecraft.
# In the 21st century, EVs have seen a resurgence due to technological developments, and an increased focus on renewable energy and the potential reduction of transportation's impact on climate change and other environmental issues. Project Drawdown describes electric vehicles as one of the 100 best contemporary solutions for addressing climate change.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import statsmodels.api as sm
df = pd.read_csv("../input/evs-one-electric-vehicle-dataset/ElectricCarData_Clean.csv")
df.head()
# Finding out the number of null values
df.isnull().sum()
# There exists no null value
# Descriptive Statistics of the dataset
df.describe()
# **Information on the type of data in each column**
#
df.info()
a = np.arange(1, 104)
# Pairplot of all the columns based on Rapid Charger presence
sb.pairplot(df, hue="RapidCharge")
# **Heatmap to show the correlation of the data**
ax = plt.figure(figsize=(15, 8))
sb.heatmap(df.corr(), linewidths=1, linecolor="white", annot=True)
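# Hedged version note: on recent pandas releases df.corr() raises an error when non-numeric columns are
# present, in which case df.corr(numeric_only=True) keeps the heatmap to the numeric columns only; the
# call above assumes an older pandas that silently drops the object columns.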
# **Frequency of the Brands in the dataset**
ax = plt.figure(figsize=(20, 5))
sb.countplot(x="Brand", data=df, palette="Paired")  # number of models per brand (plot call assumed from the cell's title and labels)
plt.grid(axis="y")
plt.title("Brands in the dataset")
plt.xlabel("Brand")
plt.ylabel("Frequency")
plt.xticks(rotation=45)
# **Top speeds achieved by the cars of a brand**
ax = plt.figure(figsize=(20, 5))
sb.barplot(x="Brand", y="TopSpeed_KmH", data=df, palette="Paired")
plt.grid(axis="y")
plt.title("Top Speed achieved by a brand")
plt.xlabel("Brand")
plt.ylabel("Top Speed")
plt.xticks(rotation=45)
# **Range a car can achieve**
ax = plt.figure(figsize=(20, 5))
sb.barplot(x="Brand", y="Range_Km", data=df, palette="tab10")
plt.grid(axis="y")
plt.title("Maximum Range achieved by a brand")
plt.xlabel("Brand")
plt.ylabel("Range")
plt.xticks(rotation=45)
# **Number of seats in each car**
ax = plt.figure(figsize=(20, 5))
sb.barplot(x="Brand", y="Seats", data=df, palette="husl")
plt.grid(axis="y")
plt.title("Seats in a car")
plt.xlabel("Brand")
plt.ylabel("Seats")
plt.xticks(rotation=45)
# **Putting independent variables as x and dependent variable as y**
x = df[["AccelSec", "Range_Km", "TopSpeed_KmH", "Efficiency_WhKm"]]
y = df["PriceEuro"]
# **Finding out the linear regression using OLS method**
x = sm.add_constant(x)
results = sm.OLS(y, x)
# **Fitting the model and summarizing**
model = results.fit()
model.summary()
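# **A hedged aside (not part of the original analysis):** besides the full summary table, the fitted
# statsmodels results object exposes the key quantities directly.
print(model.params)  # estimated intercept (const) and coefficient for each predictor
print(model.pvalues)  # p-value for each coefficient
print(model.rsquared)  # overall R-squared of the fit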
# **model with highest range**
range_df = df.sort_values(by=["Range_Km"], ascending=False)
range_df[["Brand", "Model", "Range_Km"]].head(n=1)
range_df = df.sort_values(by=["Range_Km"], ascending=False)
range_df[["Brand", "Model", "Range_Km"]].head(n=5)
# model with top speed
speed_df = df.sort_values(by=["TopSpeed_KmH"], ascending=False)
speed_df[["Brand", "Model", "TopSpeed_KmH"]].head(n=1)
# number of vehicles produced by each brand
companies = df.groupby("Brand").count()
print(companies["Model"].sort_values(ascending=False))
for column in [i for i in df.columns if df.dtypes[i] == "object"]:
print(column, len(df[column].unique()))
Y = df["Range_Km"]
Y
x = df[["AccelSec", "Range_Km", "TopSpeed_KmH", "Efficiency_WhKm"]]
y = df["PriceEuro"]
# Importing train test split from Scikit Learn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
x, y, test_size=0.3, random_state=36
)
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train, y_train)
pred = lr.predict(X_test)
# **Finding out the R-squared value**
from sklearn.metrics import r2_score
r2 = r2_score(y_test, pred)
print(r2 * 100)
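# As a hedged follow-up (not in the original notebook), other common regression metrics from
# scikit-learn complement the R-squared computed above on the same held-out predictions.
from sklearn.metrics import mean_absolute_error, mean_squared_error
mae = mean_absolute_error(y_test, pred)  # average absolute error in euros
rmse = mean_squared_error(y_test, pred) ** 0.5  # root mean squared error in euros
print(f"MAE: {mae:.2f} | RMSE: {rmse:.2f}")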
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/987/129987555.ipynb
|
evs-one-electric-vehicle-dataset
|
geoffnel
|
[{"Id": 129987555, "ScriptId": 38667668, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14132496, "CreationDate": "05/18/2023 00:37:35", "VersionNumber": 1.0, "Title": "notebook1292a42426", "EvaluationDate": "05/18/2023", "IsChange": true, "TotalLines": 153.0, "LinesInsertedFromPrevious": 153.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
|
[{"Id": 186434785, "KernelVersionId": 129987555, "SourceDatasetVersionId": 1422244}]
|
[{"Id": 1422244, "DatasetId": 832692, "DatasourceVersionId": 1455545, "CreatorUserId": 3388491, "LicenseName": "CC0: Public Domain", "CreationDate": "08/16/2020 01:19:58", "VersionNumber": 1.0, "Title": "EVs - One Electric Vehicle Dataset - Smaller", "Slug": "evs-one-electric-vehicle-dataset", "Subtitle": "Compare and analyze today's electric cars!", "Description": "**CONTEXT**:\nThis is a dataset of electric vehicles.\n\nOne of the more popular data science datasets is the mtcars dataset. It is known for its simplicity when running analysis and visualizations. \n\nWhen looking for simple datasets on EVs, there don't seem to be any. Also, given the growth in this market, this is something many would be curious about. Hence, the reason for creating this dataset.\n\n For more information, please visit the data source below.\n\n\n**TASKS**:\nSome basic tasks would include\n1. Which car has the fastest 0-100 acceleration?\n2. Which has the highest efficiency?\n3. Does a difference in power train effect the range, top speed, efficiency?\n4. Which manufacturer has the most number of vehicles?\n5. How does price relate to rapid charging?\n\n\n**CONTENT**:\nI've included two datasets below:\n\n1. 'ElectricCarData_Clean.csv' \n-- original pulled data.\n\n2. 'ElectricCarData_Norm.csv' \n-- units removed from each of the rows\n-- rapid charge has a binary yes/no value\n\nThe point of both is to have users practice some data cleaning.\n\n**CREDITS**:\nThere are two credits and sourcing that needs to be mentioned: \n1. *Datasource*: ev-database.org/\n2.*Banner image*: freepik - author - 'macrovector'\n\n\n**UPDATES**:\nThere will be future updates when we can attain additional data.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 832692, "CreatorUserId": 3388491, "OwnerUserId": 3388491.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1422244.0, "CurrentDatasourceVersionId": 1455545.0, "ForumId": 847855, "Type": 2, "CreationDate": "08/16/2020 01:19:58", "LastActivityDate": "08/16/2020", "TotalViews": 85197, "TotalDownloads": 12606, "TotalVotes": 134, "TotalKernels": 30}]
|
[{"Id": 3388491, "UserName": "geoffnel", "DisplayName": "Geoff839", "RegisterDate": "06/24/2019", "PerformanceTier": 0}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# An electric vehicle (EV) is a vehicle that uses one or more electric motors for propulsion. It can be powered by a collector system, with electricity from extravehicular sources, or it can be powered autonomously by a battery (sometimes charged by solar panels, or by converting fuel to electricity using fuel cells or a generator). EVs include, but are not limited to, road and rail vehicles, surface and underwater vessels, electric aircraft and electric spacecraft.
# In the 21st century, EVs have seen a resurgence due to technological developments, and an increased focus on renewable energy and the potential reduction of transportation's impact on climate change and other environmental issues. Project Drawdown describes electric vehicles as one of the 100 best contemporary solutions for addressing climate change.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import statsmodels.api as sm
df = pd.read_csv("../input/evs-one-electric-vehicle-dataset/ElectricCarData_Clean.csv")
df.head()
# Finding out the number of null values
df.isnull().sum()
# There exists no null value
# Descriptive Statistics of the dataset
df.describe()
# **Information on the type of data in each column**
#
df.info()
a = np.arange(1, 104)
# Pairplot of all the columns based on Rapid Charger presence
sb.pairplot(df, hue="RapidCharge")
# **Heatmap to show the correlation of the data**
ax = plt.figure(figsize=(15, 8))
sb.heatmap(df.corr(), linewidths=1, linecolor="white", annot=True)
# **Frequency of the Brands in the dataset**
ax = plt.figure(figsize=(20, 5))
sb.countplot(x="Brand", data=df, palette="Paired")  # number of models per brand (plot call assumed from the cell's title and labels)
plt.grid(axis="y")
plt.title("Brands in the dataset")
plt.xlabel("Brand")
plt.ylabel("Frequency")
plt.xticks(rotation=45)
# **Top speeds achieved by the cars of a brand**
ax = plt.figure(figsize=(20, 5))
sb.barplot(x="Brand", y="TopSpeed_KmH", data=df, palette="Paired")
plt.grid(axis="y")
plt.title("Top Speed achieved by a brand")
plt.xlabel("Brand")
plt.ylabel("Top Speed")
plt.xticks(rotation=45)
# **Range a car can achieve**
ax = plt.figure(figsize=(20, 5))
sb.barplot(x="Brand", y="Range_Km", data=df, palette="tab10")
plt.grid(axis="y")
plt.title("Maximum Range achieved by a brand")
plt.xlabel("Brand")
plt.ylabel("Range")
plt.xticks(rotation=45)
# **Number of seats in each car**
ax = plt.figure(figsize=(20, 5))
sb.barplot(x="Brand", y="Seats", data=df, palette="husl")
plt.grid(axis="y")
plt.title("Seats in a car")
plt.xlabel("Brand")
plt.ylabel("Seats")
plt.xticks(rotation=45)
# **Putting independent variables as x and dependent variable as y**
x = df[["AccelSec", "Range_Km", "TopSpeed_KmH", "Efficiency_WhKm"]]
y = df["PriceEuro"]
# **Finding out the linear regression using OLS method**
x = sm.add_constant(x)
results = sm.OLS(y, x)
# **Fitting the model and summarizing**
model = results.fit()
model.summary()
# **model with highest range**
range_df = df.sort_values(by=["Range_Km"], ascending=False)
range_df[["Brand", "Model", "Range_Km"]].head(n=1)
range_df = df.sort_values(by=["Range_Km"], ascending=False)
range_df[["Brand", "Model", "Range_Km"]].head(n=5)
# model with top speed
speed_df = df.sort_values(by=["TopSpeed_KmH"], ascending=False)
speed_df[["Brand", "Model", "TopSpeed_KmH"]].head(n=1)
# number of vehicles produced by each brand
companies = df.groupby("Brand").count()
print(companies["Model"].sort_values(ascending=False))
for column in [i for i in df.columns if df.dtypes[i] == "object"]:
print(column, len(df[column].unique()))
Y = df["Range_Km"]
Y
x = df[["AccelSec", "Range_Km", "TopSpeed_KmH", "Efficiency_WhKm"]]
y = df["PriceEuro"]
# Importing train test split from Scikit Learn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
x, y, test_size=0.3, random_state=36
)
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train, y_train)
pred = lr.predict(X_test)
# **Finding out the R-squared value**
from sklearn.metrics import r2_score
r2 = r2_score(y_test, pred)
print(r2 * 100)
|
[{"evs-one-electric-vehicle-dataset/ElectricCarData_Clean.csv": {"column_names": "[\"Brand\", \"Model\", \"AccelSec\", \"TopSpeed_KmH\", \"Range_Km\", \"Efficiency_WhKm\", \"FastCharge_KmH\", \"RapidCharge\", \"PowerTrain\", \"PlugType\", \"BodyStyle\", \"Segment\", \"Seats\", \"PriceEuro\"]", "column_data_types": "{\"Brand\": \"object\", \"Model\": \"object\", \"AccelSec\": \"float64\", \"TopSpeed_KmH\": \"int64\", \"Range_Km\": \"int64\", \"Efficiency_WhKm\": \"int64\", \"FastCharge_KmH\": \"object\", \"RapidCharge\": \"object\", \"PowerTrain\": \"object\", \"PlugType\": \"object\", \"BodyStyle\": \"object\", \"Segment\": \"object\", \"Seats\": \"int64\", \"PriceEuro\": \"int64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 103 entries, 0 to 102\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Brand 103 non-null object \n 1 Model 103 non-null object \n 2 AccelSec 103 non-null float64\n 3 TopSpeed_KmH 103 non-null int64 \n 4 Range_Km 103 non-null int64 \n 5 Efficiency_WhKm 103 non-null int64 \n 6 FastCharge_KmH 103 non-null object \n 7 RapidCharge 103 non-null object \n 8 PowerTrain 103 non-null object \n 9 PlugType 103 non-null object \n 10 BodyStyle 103 non-null object \n 11 Segment 103 non-null object \n 12 Seats 103 non-null int64 \n 13 PriceEuro 103 non-null int64 \ndtypes: float64(1), int64(5), object(8)\nmemory usage: 11.4+ KB\n", "summary": "{\"AccelSec\": {\"count\": 103.0, \"mean\": 7.39611650485437, \"std\": 3.0174304849311087, \"min\": 2.1, \"25%\": 5.1, \"50%\": 7.3, \"75%\": 9.0, \"max\": 22.4}, \"TopSpeed_KmH\": {\"count\": 103.0, \"mean\": 179.19417475728156, \"std\": 43.573030481499785, \"min\": 123.0, \"25%\": 150.0, \"50%\": 160.0, \"75%\": 200.0, \"max\": 410.0}, \"Range_Km\": {\"count\": 103.0, \"mean\": 338.7864077669903, \"std\": 126.0144444323618, \"min\": 95.0, \"25%\": 250.0, \"50%\": 340.0, \"75%\": 400.0, \"max\": 970.0}, \"Efficiency_WhKm\": {\"count\": 103.0, \"mean\": 189.16504854368932, \"std\": 29.566839230892835, \"min\": 104.0, \"25%\": 168.0, \"50%\": 180.0, \"75%\": 203.0, \"max\": 273.0}, \"Seats\": {\"count\": 103.0, \"mean\": 4.883495145631068, \"std\": 0.7958343860843434, \"min\": 2.0, \"25%\": 5.0, \"50%\": 5.0, \"75%\": 5.0, \"max\": 7.0}, \"PriceEuro\": {\"count\": 103.0, \"mean\": 55811.563106796115, \"std\": 34134.665280290195, \"min\": 20129.0, \"25%\": 34429.5, \"50%\": 45000.0, \"75%\": 65000.0, \"max\": 215000.0}}", "examples": "{\"Brand\":{\"0\":\"Tesla \",\"1\":\"Volkswagen \",\"2\":\"Polestar \",\"3\":\"BMW \"},\"Model\":{\"0\":\"Model 3 Long Range Dual Motor\",\"1\":\"ID.3 Pure\",\"2\":\"2\",\"3\":\"iX3 \"},\"AccelSec\":{\"0\":4.6,\"1\":10.0,\"2\":4.7,\"3\":6.8},\"TopSpeed_KmH\":{\"0\":233,\"1\":160,\"2\":210,\"3\":180},\"Range_Km\":{\"0\":450,\"1\":270,\"2\":400,\"3\":360},\"Efficiency_WhKm\":{\"0\":161,\"1\":167,\"2\":181,\"3\":206},\"FastCharge_KmH\":{\"0\":\"940\",\"1\":\"250\",\"2\":\"620\",\"3\":\"560\"},\"RapidCharge\":{\"0\":\"Yes\",\"1\":\"Yes\",\"2\":\"Yes\",\"3\":\"Yes\"},\"PowerTrain\":{\"0\":\"AWD\",\"1\":\"RWD\",\"2\":\"AWD\",\"3\":\"RWD\"},\"PlugType\":{\"0\":\"Type 2 CCS\",\"1\":\"Type 2 CCS\",\"2\":\"Type 2 CCS\",\"3\":\"Type 2 CCS\"},\"BodyStyle\":{\"0\":\"Sedan\",\"1\":\"Hatchback\",\"2\":\"Liftback\",\"3\":\"SUV\"},\"Segment\":{\"0\":\"D\",\"1\":\"C\",\"2\":\"D\",\"3\":\"D\"},\"Seats\":{\"0\":5,\"1\":5,\"2\":5,\"3\":5},\"PriceEuro\":{\"0\":55480,\"1\":30000,\"2\":56440,\"3\":68040}}"}}]
| true | 1 |
<start_data_description><data_path>evs-one-electric-vehicle-dataset/ElectricCarData_Clean.csv:
<column_names>
['Brand', 'Model', 'AccelSec', 'TopSpeed_KmH', 'Range_Km', 'Efficiency_WhKm', 'FastCharge_KmH', 'RapidCharge', 'PowerTrain', 'PlugType', 'BodyStyle', 'Segment', 'Seats', 'PriceEuro']
<column_types>
{'Brand': 'object', 'Model': 'object', 'AccelSec': 'float64', 'TopSpeed_KmH': 'int64', 'Range_Km': 'int64', 'Efficiency_WhKm': 'int64', 'FastCharge_KmH': 'object', 'RapidCharge': 'object', 'PowerTrain': 'object', 'PlugType': 'object', 'BodyStyle': 'object', 'Segment': 'object', 'Seats': 'int64', 'PriceEuro': 'int64'}
<dataframe_Summary>
{'AccelSec': {'count': 103.0, 'mean': 7.39611650485437, 'std': 3.0174304849311087, 'min': 2.1, '25%': 5.1, '50%': 7.3, '75%': 9.0, 'max': 22.4}, 'TopSpeed_KmH': {'count': 103.0, 'mean': 179.19417475728156, 'std': 43.573030481499785, 'min': 123.0, '25%': 150.0, '50%': 160.0, '75%': 200.0, 'max': 410.0}, 'Range_Km': {'count': 103.0, 'mean': 338.7864077669903, 'std': 126.0144444323618, 'min': 95.0, '25%': 250.0, '50%': 340.0, '75%': 400.0, 'max': 970.0}, 'Efficiency_WhKm': {'count': 103.0, 'mean': 189.16504854368932, 'std': 29.566839230892835, 'min': 104.0, '25%': 168.0, '50%': 180.0, '75%': 203.0, 'max': 273.0}, 'Seats': {'count': 103.0, 'mean': 4.883495145631068, 'std': 0.7958343860843434, 'min': 2.0, '25%': 5.0, '50%': 5.0, '75%': 5.0, 'max': 7.0}, 'PriceEuro': {'count': 103.0, 'mean': 55811.563106796115, 'std': 34134.665280290195, 'min': 20129.0, '25%': 34429.5, '50%': 45000.0, '75%': 65000.0, 'max': 215000.0}}
<dataframe_info>
RangeIndex: 103 entries, 0 to 102
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Brand 103 non-null object
1 Model 103 non-null object
2 AccelSec 103 non-null float64
3 TopSpeed_KmH 103 non-null int64
4 Range_Km 103 non-null int64
5 Efficiency_WhKm 103 non-null int64
6 FastCharge_KmH 103 non-null object
7 RapidCharge 103 non-null object
8 PowerTrain 103 non-null object
9 PlugType 103 non-null object
10 BodyStyle 103 non-null object
11 Segment 103 non-null object
12 Seats 103 non-null int64
13 PriceEuro 103 non-null int64
dtypes: float64(1), int64(5), object(8)
memory usage: 11.4+ KB
<some_examples>
{'Brand': {'0': 'Tesla ', '1': 'Volkswagen ', '2': 'Polestar ', '3': 'BMW '}, 'Model': {'0': 'Model 3 Long Range Dual Motor', '1': 'ID.3 Pure', '2': '2', '3': 'iX3 '}, 'AccelSec': {'0': 4.6, '1': 10.0, '2': 4.7, '3': 6.8}, 'TopSpeed_KmH': {'0': 233, '1': 160, '2': 210, '3': 180}, 'Range_Km': {'0': 450, '1': 270, '2': 400, '3': 360}, 'Efficiency_WhKm': {'0': 161, '1': 167, '2': 181, '3': 206}, 'FastCharge_KmH': {'0': '940', '1': '250', '2': '620', '3': '560'}, 'RapidCharge': {'0': 'Yes', '1': 'Yes', '2': 'Yes', '3': 'Yes'}, 'PowerTrain': {'0': 'AWD', '1': 'RWD', '2': 'AWD', '3': 'RWD'}, 'PlugType': {'0': 'Type 2 CCS', '1': 'Type 2 CCS', '2': 'Type 2 CCS', '3': 'Type 2 CCS'}, 'BodyStyle': {'0': 'Sedan', '1': 'Hatchback', '2': 'Liftback', '3': 'SUV'}, 'Segment': {'0': 'D', '1': 'C', '2': 'D', '3': 'D'}, 'Seats': {'0': 5, '1': 5, '2': 5, '3': 5}, 'PriceEuro': {'0': 55480, '1': 30000, '2': 56440, '3': 68040}}
<end_description>
| 1,503 | 2 | 2,822 | 1,503 |
129449779
|
<jupyter_start><jupyter_text>SQL Murder Mystery Database
There's been a Murder in SQL City! The SQL Murder Mystery is designed to be both a self-directed lesson to learn SQL concepts and commands and a fun game for experienced SQL users to solve an intriguing crime.
A crime has taken place and the detective needs your help. The detective gave you the crime scene report, but you somehow lost it. You vaguely remember that the crime was a murder that occurred sometime on Jan.15, 2018 and that it took place in SQL City. Start by retrieving the corresponding crime scene report from the police department’s database.
Kaggle dataset identifier: sql-murder-mystery-database
<jupyter_script># 
# A crime has taken place and the detective needs your help. The detective gave you the crime scene report, but you somehow lost it. You vaguely remember that the crime was a murder that occurred sometime on Jan. 15, 2018 and that it took place in SQL City. Start by retrieving the corresponding crime scene report from the police department’s database.
# Below is the schema we'll be using:
# 
# import libraries
import sqlite3 as sql # run queries on relational database
import pandas as pd # data processing
# connect to the SQL Murder Mystery Database
conn = sql.connect("/kaggle/input/sql-murder-mystery-database/sql-murder-mystery.db")
# set up the column width to take up as much space as it needs to. This prevents cutting off text when displaying the data
pd.set_option("display.max_colwidth", 0)
# Remember that what we know is:
# The crime was a **murder** that occurred sometime on **Jan.15, 2018** and that it took place in **SQL City**.
# pull the crime scene report
query_1 = """
SELECT *
FROM crime_scene_report
WHERE
date = '20180115'
AND city = 'SQL City'
AND type = 'murder'
"""
# read the query
crime_scene_report = pd.read_sql_query(query_1, conn)
# display the query aka the crime scene report
crime_scene_report
# We have some leads on the two witnesses. Let's find out who they are.
# find witness 1, who we know lives at the last house on "Northwestern Dr"
query_2 = """
SELECT *
FROM person
WHERE
address_street_name = "Northwestern Dr"
ORDER BY address_number DESC
LIMIT 1
"""
# read the query
witness_1 = pd.read_sql_query(query_2, conn)
# display the query aka witness 1's identity
witness_1
# We have identified witness 1 as Monty Schapiro. Let's move on to witness 2.
# find witness 2, who we know is named Annabel and lives somewhere on "Franklin Ave"
query_3 = """
SELECT *
FROM person
WHERE
address_street_name = "Franklin Ave"
AND name LIKE "%Annabel%"
"""
# read the query
witness_2 = pd.read_sql_query(query_3, conn)
# display the query aka witness 2's identity
witness_2
# We have identified witness 2 as Annabel Miller. Now that we know who they are, let's take a look at their statements.
# read the witness statements
query_4 = """
SELECT p.name, i.*
FROM interview as i, person as p
WHERE
person_id IN (14887, 16371)
AND i.person_id = p.id
"""
# read the query
witness_statements = pd.read_sql_query(query_4, conn)
# display the query aka the witness statements
witness_statements
# We have a few clues from the witnesses to finally get to a suspect list. Which Get Fit member has (1) a membership number that starts with “48Z”, (2) a car plate with “H42W” and (3) checked into the gym on Jan 9?
# find the suspect(s)
query_5 = """
SELECT p.name, dl.plate_number, gfm.id, gfc.check_in_date
FROM person as p, drivers_license as dl, get_fit_now_member as gfm, get_fit_now_check_in as gfc
ON gfm.person_id = p.id
AND gfm.id = gfc.membership_id
AND p.license_id = dl.id
WHERE gfc.membership_id LIKE "%48Z%"
AND dl.plate_number LIKE "%H42W%"
AND gfc.check_in_date = '20180109'
"""
# read the query
suspects = pd.read_sql_query(query_5, conn)
# display the query aka the suspect list
suspects
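# As a hedged aside (an alternative phrasing, not part of the original walkthrough), the same suspect
# lookup can be written with explicit INNER JOINs, and the membership filter can follow the clue
# literally by matching IDs that start with "48Z".
query_5_joins = """
SELECT p.name, dl.plate_number, gfm.id, gfc.check_in_date
FROM get_fit_now_member AS gfm
INNER JOIN person AS p ON p.id = gfm.person_id
INNER JOIN get_fit_now_check_in AS gfc ON gfc.membership_id = gfm.id
INNER JOIN drivers_license AS dl ON dl.id = p.license_id
WHERE gfm.id LIKE '48Z%'
  AND dl.plate_number LIKE '%H42W%'
  AND gfc.check_in_date = '20180109'
"""
pd.read_sql_query(query_5_joins, conn)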
# Looks like there's only one suspect matching our clues! Let's see what Jeremy Bowers has to say.
# get the suspect statement
query_6 = """
SELECT p.name, i.*
FROM person as p, interview as i
ON p.id = i.person_id
WHERE p.name = "Jeremy Bowers"
"""
# read the query
suspect_statement = pd.read_sql_query(query_6, conn)
# display the query aka the suspect statement
suspect_statement
# Turns out Jeremy was a hired hitman... so who is the mystery woman behind the murder? Let's start by identifying all women who have red hair, drive a Tesla Model S, and are between 65" and 67" in height.
# identify all women who have red hair, drive a Tesla Model S, and are between 65" and 67" in height
query_7 = """
SELECT *
FROM drivers_license
WHERE
gender = "female" AND
hair_color = "red" AND
car_make = "Tesla" AND
car_model = "Model S"AND
height BETWEEN 65 AND 67
"""
# read the query
new_suspects = pd.read_sql_query(query_7, conn)
# display the query aka the new suspects
new_suspects
# We have 3 suspects for our mystery woman. We know that our mystery woman attended the SQL Symphony Concert 3 times in Dec 2017, so let's see if she checked into the Facebook event.
# of the identified women, find who checked into a Facebook event for SQL Symphony Concert
query_8 = """
SELECT p.name, fb.*
FROM facebook_event_checkin as fb, person as p, drivers_license as dl
ON
fb.person_id = p.id AND
p.license_id = dl.id
WHERE p.license_id IN (202298,291182,918773)
"""
# read the query
mystery_woman = pd.read_sql_query(query_8, conn)
# display the query aka the new suspects
mystery_woman
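# The query above only lists the Facebook check-ins for the three candidates. As a hedged refinement
# (the event_name and date columns are assumed from the schema diagram, and this query is not part of
# the original walkthrough), we can also require exactly three SQL Symphony Concert check-ins in Dec 2017.
query_9 = """
SELECT p.name, COUNT(*) AS concert_checkins
FROM facebook_event_checkin AS fb
INNER JOIN person AS p ON p.id = fb.person_id
WHERE p.license_id IN (202298, 291182, 918773)
  AND fb.event_name = 'SQL Symphony Concert'
  AND fb.date LIKE '201712%'
GROUP BY p.name
HAVING COUNT(*) = 3
"""
pd.read_sql_query(query_9, conn)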
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/449/129449779.ipynb
|
sql-murder-mystery-database
|
johnp47
|
[{"Id": 129449779, "ScriptId": 38489830, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14288627, "CreationDate": "05/13/2023 23:40:00", "VersionNumber": 2.0, "Title": "SQL Murder Mystery - Solution Walkthrough", "EvaluationDate": "05/13/2023", "IsChange": false, "TotalLines": 166.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 166.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185510459, "KernelVersionId": 129449779, "SourceDatasetVersionId": 3833395}]
|
[{"Id": 3833395, "DatasetId": 2282161, "DatasourceVersionId": 3888216, "CreatorUserId": 7822593, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "06/20/2022 06:16:52", "VersionNumber": 1.0, "Title": "SQL Murder Mystery Database", "Slug": "sql-murder-mystery-database", "Subtitle": "There's been a Murder in SQL City!", "Description": "There's been a Murder in SQL City! The SQL Murder Mystery is designed to be both a self-directed lesson to learn SQL concepts and commands and a fun game for experienced SQL users to solve an intriguing crime.\n\nA crime has taken place and the detective needs your help. The detective gave you the crime scene report, but you somehow lost it. You vaguely remember that the crime was a \u200bmurder\u200b that occurred sometime on \u200bJan.15, 2018\u200b and that it took place in \u200bSQL City\u200b. Start by retrieving the corresponding crime scene report from the police department\u2019s database.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2282161, "CreatorUserId": 7822593, "OwnerUserId": 7822593.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3833395.0, "CurrentDatasourceVersionId": 3888216.0, "ForumId": 2308721, "Type": 2, "CreationDate": "06/20/2022 06:16:52", "LastActivityDate": "06/20/2022", "TotalViews": 9742, "TotalDownloads": 1296, "TotalVotes": 36, "TotalKernels": 10}]
|
[{"Id": 7822593, "UserName": "johnp47", "DisplayName": "John", "RegisterDate": "07/02/2021", "PerformanceTier": 1}]
|
# 
# A crime has taken place and the detective needs your help. The detective gave you the crime scene report, but you somehow lost it. You vaguely remember that the crime was a murder that occurred sometime on Jan.15, 2018 and that it took place in SQL City. Start by retrieving the corresponding crime scene report from the police department’s database.
# Below is the schema we'll be using:
# 
# import libraries
import sqlite3 as sql # run queries on relational database
import pandas as pd # data processing
# connect to the SQL Murder Mystery Database
conn = sql.connect("/kaggle/input/sql-murder-mystery-database/sql-murder-mystery.db")
# set up the column width to take up as much space as it needs to. This prevents cutting off text when displaying the data
pd.set_option("display.max_colwidth", 0)
# Remember that what we know is:
# The crime was a **murder** that occurred sometime on **Jan.15, 2018** and that it took place in **SQL City**.
# pull the crime scene report
query_1 = """
SELECT *
FROM crime_scene_report
WHERE
date = '20180115'
AND city = 'SQL City'
AND type = 'murder'
"""
# read the query
crime_scene_report = pd.read_sql_query(query_1, conn)
# display the query aka the crime scene report
crime_scene_report
# We have some leads on the two witnesses. Let's find out who they are.
# find witness 1, who we know lives at the last house on "Northwestern Dr"
query_2 = """
SELECT *
FROM person
WHERE
address_street_name = "Northwestern Dr"
ORDER BY address_number DESC
LIMIT 1
"""
# read the query
witness_1 = pd.read_sql_query(query_2, conn)
# display the query aka witness 1's identity
witness_1
# We have identified witness 1 as Monty Schapiro. Let's move on to witness 2.
# find witness 2, who we know is named Annabel and lives somewhere on "Franklin Ave"
query_3 = """
SELECT *
FROM person
WHERE
address_street_name = "Franklin Ave"
AND name LIKE "%Annabel%"
"""
# read the query
witness_2 = pd.read_sql_query(query_3, conn)
# display the query aka witness 2's identity
witness_2
# We have identified witness 2 as Annabel Miller. Now that we know who they are, let's take a look at their statements.
# read the witness statements
query_4 = """
SELECT p.name, i.*
FROM interview as i, person as p
WHERE
person_id IN (14887, 16371)
AND i.person_id = p.id
"""
# read the query
witness_statements = pd.read_sql_query(query_4, conn)
# display the query aka the witness statements
witness_statements
# We have a few clues from the witnesses to finally get to a suspect list. Which Get Fit member has (1) a membership number that starts with “48Z”, (2) a car plate with “H42W” and (3) checked into the gym on Jan 9?
# find the suspect(s)
query_5 = """
SELECT p.name, dl.plate_number, gfm.id, gfc.check_in_date
FROM person as p, drivers_license as dl, get_fit_now_member as gfm, get_fit_now_check_in as gfc
ON gfm.person_id = p.id
AND gfm.id = gfc.membership_id
AND p.license_id = dl.id
WHERE gfc.membership_id LIKE "%48Z%"
AND dl.plate_number LIKE "%H42W%"
AND gfc.check_in_date = '20180109'
"""
# read the query
suspects = pd.read_sql_query(query_5, conn)
# display the query aka the suspect list
suspects
# Looks like there's only one suspect matching our clues! Let's see what Jeremy Bowers has to say.
# get the suspect statement
query_6 = """
SELECT p.name, i.*
FROM person as p, interview as i
ON p.id = i.person_id
WHERE p.name = "Jeremy Bowers"
"""
# read the query
suspect_statement = pd.read_sql_query(query_6, conn)
# display the query aka the suspect statement
suspect_statement
# Turns out Jeremy was a hired hitman...so who is the mystery woman behind the murder? Let's start by identifying all women who have red hair, drive a Tesla Model S, and are between 65" and 67" tall.
# identify all women who have red hair, drive a Tesla Model S, and are between 65" and 67" tall
query_7 = """
SELECT *
FROM drivers_license
WHERE
gender = "female" AND
hair_color = "red" AND
car_make = "Tesla" AND
car_model = "Model S"AND
height BETWEEN 65 AND 67
"""
# read the query
new_suspects = pd.read_sql_query(query_7, conn)
# display the query aka the new suspects
new_suspects
# We have 3 suspects for our mystery woman. We know that our mystery woman attended the SQL Symphony Concert 3 times in Dec 2017, so let's see if she checked into the Facebook event.
# of the identified women, find who checked into a Facebook event for SQL Symphony Concert
query_8 = """
SELECT p.name, fb.*
FROM facebook_event_checkin as fb, person as p, drivers_license as dl
ON
fb.person_id = p.id AND
p.license_id = dl.id
WHERE p.license_id IN (202298,291182,918773)
"""
# read the query
mystery_woman = pd.read_sql_query(query_8, conn)
# display the query aka the new suspects
mystery_woman
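# A possible follow-up (a sketch, not part of the original walkthrough): confirm which of the three
# candidates checked into the "SQL Symphony Concert" exactly 3 times in December 2017. This assumes
# facebook_event_checkin has event_name and date (YYYYMMDD) columns, as in the standard schema diagram.
query_9 = """
SELECT p.name, COUNT(*) AS checkins
FROM facebook_event_checkin AS fb
JOIN person AS p ON fb.person_id = p.id
WHERE p.license_id IN (202298, 291182, 918773)
    AND fb.event_name LIKE '%SQL Symphony Concert%'
    AND fb.date BETWEEN 20171201 AND 20171231
GROUP BY p.name
HAVING COUNT(*) = 3
"""
# read the query
final_suspect = pd.read_sql_query(query_9, conn)
# display the query aka the final suspect
final_suspect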
| false | 0 | 1,691 | 0 | 1,872 | 1,691 |
||
129813774
|
<jupyter_start><jupyter_text>Amazon,Google,Microsoft,Apple stock price(2013-18)
A combination of 4 Datasets in CSV formats containing files of big Tech companies stock prices for a span of Five years (2013 - 2018).
Do check out my notebook as well your upvote may help me a lot to reach Grandmaster:
https://www.kaggle.com/code/darshanprabhu09/tech-titans-in-tandem-exploring-the-time-series
APPLE STOCK (AAPL) :
DATE : Date of the stock where data was recorded .
OPEN : the amount and value of materials that a company has available for sale or use at the beginning of an accounting period.
HIGH : the highest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.
LOW : the lowest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.
CLOSE : Closing value of stock
VOLUME : the total number of shares traded in a specified time frame.
AMAZON STOCK (AMZN.csv)
DATE : Date of the stock where data was recorded .
OPEN : the amount and value of materials that a company has available for sale or use at the beginning of an accounting period.
HIGH : the highest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.
LOW : the lowest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.
CLOSE : Closing value of stock
VOLUME : the total number of shares traded in a specified time frame.
Google stock : (Googl.csv) :
DATE : Date of the stock where data was recorded .
OPEN : the amount and value of materials that a company has available for sale or use at the beginning of an accounting period.
HIGH : the highest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.
LOW : the lowest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.
CLOSE : Closing value of stock
VOLUME : the total number of shares traded in a specified time frame.
MICROSOFT STOCK (MSFT_data.csv) :
DATE : Date of the stock where data was recorded .
OPEN : the amount and value of materials that a company has available for sale or use at the beginning of an accounting period.
HIGH : the highest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.
LOW : the lowest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.
CLOSE : Closing value of stock
VOLUME : the total number of shares traded in a specified time frame.
Do upvote the dataset so it can reach further kagglers
Kaggle dataset identifier: stock-prices-for
<jupyter_script>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
### so that you don't get warnings
from warnings import filterwarnings
filterwarnings("ignore")
path = "/kaggle/input/stock-prices-for"
# setting the path to locatioon of file for reading all dataset into just a single list.
companies = [
"/kaggle/input/stock-prices-for/AAPL_data.csv ", # apple dataset
"/kaggle/input/stock-prices-for/AMZN_data.csv", # amazon stock data
"/kaggle/input/stock-prices-for/GOOG_data.csv", # google stock data
"/kaggle/input/stock-prices-for/MSFT_data.csv",
] # Microsoft stock data
# We imported all the companies into a single list known as comapnies
# blank dataframe
all_data = pd.DataFrame()
for file in company_list:
current_df = pd.read_csv(path + "/" + file)
all_data = pd.concat(
[all_data, current_df]
) # Concatinating or joining every dataset into a table format.
all_data.shape
all_data.head()
all_data.dtypes # type of data in each columns
all_data["date"] == pd.to_datetime(
all_data["date"]
) # converting the data to proper date using to_datetime function.
all_data.date[0] # verifying values/
all_data.columns # printing all the columns
# # (1.) Visualizing the Closing price of all the stocks.
tech_list = all_data["Name"].unique() # retrieving all unique values or name of stocks
plt.figure(figsize=(19, 25))
for i, company in enumerate(tech_list, 1):
plt.subplot(2, 2, i)
df = all_data[all_data["Name"] == company]
plt.plot(df["date"], df["close"])
plt.xlabel("Date")
plt.ylabel("Closing prices")
plt.title("Closing price of stocks as per as time")
# # (2.) Analysis of the amount of volume been traded everyday..
plt.figure(figsize=(25, 15))
for i, company in enumerate(tech_list, 1):
plt.subplot(2, 2, i)
df = all_data[all_data["Name"] == company]
plt.plot(df["date"], df["volume"])
plt.xlabel("Date")
plt.ylabel("Volume")
plt.title("Volume of stocks as per as time")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/813/129813774.ipynb
|
stock-prices-for
| null |
[{"Id": 129813774, "ScriptId": 38605320, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12830464, "CreationDate": "05/16/2023 16:41:47", "VersionNumber": 1.0, "Title": "Time series Analysis on Amazon stocks.", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 70.0, "LinesInsertedFromPrevious": 70.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186188935, "KernelVersionId": 129813774, "SourceDatasetVersionId": 5699596}]
|
[{"Id": 5699596, "DatasetId": 3277373, "DatasourceVersionId": 5775255, "CreatorUserId": 12830464, "LicenseName": "Other (specified in description)", "CreationDate": "05/16/2023 15:17:16", "VersionNumber": 1.0, "Title": "Amazon,Google,Microsoft,Apple stock price(2013-18)", "Slug": "stock-prices-for", "Subtitle": "Big Tech companies stock prices.", "Description": "A combination of 4 Datasets in CSV formats containing files of big Tech companies stock prices for a span of Five years (2013 - 2018).\n\nDo check out my notebook as well your upvote may help me a lot to reach Grandmaster: \n\nhttps://www.kaggle.com/code/darshanprabhu09/tech-titans-in-tandem-exploring-the-time-series\n\nAPPLE STOCK (AAPL) : \n\nDATE : Date of the stock where data was recorded . \n\nOPEN : the amount and value of materials that a company has available for sale or use at the beginning of an accounting period.\n\nHIGH : the highest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.\n\nLOW : the lowest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.\n\nCLOSE : Closing value of stock\n\nVOLUME : the total number of shares traded in a specified time frame. \n\nAMAZON STOCK (AMZN.csv) \n\n\nDATE : Date of the stock where data was recorded . \n\nOPEN : the amount and value of materials that a company has available for sale or use at the beginning of an accounting period.\n\nHIGH : the highest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.\n\nLOW : the lowest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.\n\nCLOSE : Closing value of stock\n\nVOLUME : the total number of shares traded in a specified time frame. \n\nGoogle stock : (Googl.csv) : \n\nDATE : Date of the stock where data was recorded . \n\nOPEN : the amount and value of materials that a company has available for sale or use at the beginning of an accounting period.\n\nHIGH : the highest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.\n\nLOW : the lowest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.\n\nCLOSE : Closing value of stock\n\nVOLUME : the total number of shares traded in a specified time frame. \n\nMICROSOFT STOCK (MSFT_data.csv) : \n\nDATE : Date of the stock where data was recorded . \n\nOPEN : the amount and value of materials that a company has available for sale or use at the beginning of an accounting period.\n\nHIGH : the highest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.\n\nLOW : the lowest price at which a stock traded during the course of the trading day and is typically higher than the closing or equal to the opening price.\n\nCLOSE : Closing value of stock\n\nVOLUME : the total number of shares traded in a specified time frame. \n\n\nDo upvote the dataset so it can reach further kagglers", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3277373, "CreatorUserId": 12830464, "OwnerUserId": 12830464.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 6044498.0, "CurrentDatasourceVersionId": 6122684.0, "ForumId": 3343062, "Type": 2, "CreationDate": "05/16/2023 15:17:16", "LastActivityDate": "05/16/2023", "TotalViews": 12565, "TotalDownloads": 2882, "TotalVotes": 77, "TotalKernels": 5}]
| null |
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
### so that you don't get warnings
from warnings import filterwarnings
filterwarnings("ignore")
path = "/kaggle/input/stock-prices-for"
# setting the path to locatioon of file for reading all dataset into just a single list.
companies = [
"/kaggle/input/stock-prices-for/AAPL_data.csv ", # apple dataset
"/kaggle/input/stock-prices-for/AMZN_data.csv", # amazon stock data
"/kaggle/input/stock-prices-for/GOOG_data.csv", # google stock data
"/kaggle/input/stock-prices-for/MSFT_data.csv",
] # Microsoft stock data
# We imported all the companies into a single list known as comapnies
# blank dataframe
all_data = pd.DataFrame()
for file in company_list:
current_df = pd.read_csv(path + "/" + file)
all_data = pd.concat(
[all_data, current_df]
) # Concatinating or joining every dataset into a table format.
all_data.shape
all_data.head()
all_data.dtypes # type of data in each columns
all_data["date"] == pd.to_datetime(
all_data["date"]
) # converting the data to proper date using to_datetime function.
all_data.date[0] # verifying values/
all_data.columns # printing all the columns
# # (1.) Visualizing the Closing price of all the stocks.
tech_list = all_data["Name"].unique() # retrieving all unique values or name of stocks
plt.figure(figsize=(19, 25))
for i, company in enumerate(tech_list, 1):
plt.subplot(2, 2, i)
df = all_data[all_data["Name"] == company]
plt.plot(df["date"], df["close"])
plt.xlabel("Date")
plt.ylabel("Closing prices")
plt.title("Closing price of stocks as per as time")
# # (2.) Analysis of the amount of volume been traded everyday..
plt.figure(figsize=(25, 15))
for i, company in enumerate(tech_list, 1):
plt.subplot(2, 2, i)
df = all_data[all_data["Name"] == company]
plt.plot(df["date"], df["volume"])
plt.xlabel("Date")
plt.ylabel("Volume")
plt.title("Volume of stocks as per as time")
| false | 0 | 627 | 0 | 1,380 | 627 |
||
129813621
|
<jupyter_start><jupyter_text>CIFAKE: Real and AI-Generated Synthetic Images
# CIFAKE: Real and AI-Generated Synthetic Images
The quality of AI-generated images has rapidly increased, leading to concerns of authenticity and trustworthiness.
CIFAKE is a dataset that contains 60,000 synthetically-generated images and 60,000 real images (collected from CIFAR-10). Can computer vision techniques be used to detect when an image is real or has been generated by AI?
Further information on this dataset can be found here: [Bird, J.J., Lotfi, A. (2023). CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126.](https://arxiv.org/abs/2303.14126)

## Dataset details
The dataset contains two classes - REAL and FAKE.
For REAL, we collected the images from Krizhevsky & Hinton's [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html)
For the FAKE images, we generated the equivalent of CIFAR-10 with Stable Diffusion version 1.4
There are 100,000 images for training (50k per class) and 20,000 for testing (10k per class)
## Papers with Code
The dataset and all studies using it are linked using [Papers with Code](https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images)
[https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images](https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images)
## References
If you use this dataset, you **must** cite the following sources
[Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdfl)
[Bird, J.J., Lotfi, A. (2023). CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126.](https://arxiv.org/abs/2303.14126)
Real images are from Krizhevsky & Hinton (2009), fake images are from Bird & Lotfi (2023). The Bird & Lotfi study is a preprint currently available on [ArXiv](https://arxiv.org/abs/2303.14126) and this description will be updated when the paper is published.
## Notes
The updates to the dataset on the 28th of March 2023 did not change anything; the file formats ".jpeg" were renamed ".jpg" and the root folder was uploaded to meet Kaggle's usability requirements.
## License
This dataset is published under the [same MIT license as CIFAR-10](https://github.com/wichtounet/cifar-10/blob/master/LICENSE):
*Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:*
*The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.*
*THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.*
Kaggle dataset identifier: cifake-real-and-ai-generated-synthetic-images
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import os
import random
import shutil
# Set the paths to your dataset folders
dataset_dir = "/kaggle/input/cifake-real-and-ai-generated-synthetic-images/train"
real_dir = os.path.join(dataset_dir, "REAL")
fake_dir = os.path.join(dataset_dir, "FAKE")
# Set the paths to the new directories that will contain the selected images
train_dir = "/kaggle/working/train"
real_train_dir = os.path.join(train_dir, "REAL")
fake_train_dir = os.path.join(train_dir, "FAKE")
# Create the new directories if they don't exist
if not os.path.exists(real_train_dir):
os.makedirs(real_train_dir)
if not os.path.exists(fake_train_dir):
os.makedirs(fake_train_dir)
# Set the number of images to select from each folder
num_images = 2000
# Randomly select the required number of images from the REAL folder and copy them to the new directory
real_images = os.listdir(real_dir)
selected_real_images = random.sample(real_images, num_images)
for image_name in selected_real_images:
source_path = os.path.join(real_dir, image_name)
dest_path = os.path.join(real_train_dir, image_name)
shutil.copyfile(source_path, dest_path)
# Randomly select the required number of images from the FAKE folder and copy them to the new directory
fake_images = os.listdir(fake_dir)
selected_fake_images = random.sample(fake_images, num_images)
for image_name in selected_fake_images:
source_path = os.path.join(fake_dir, image_name)
dest_path = os.path.join(fake_train_dir, image_name)
shutil.copyfile(source_path, dest_path)
# Set the paths to your dataset folders
dataset_dir_test = "/kaggle/input/cifake-real-and-ai-generated-synthetic-images/test"
real_dir = os.path.join(dataset_dir_test, "REAL")
fake_dir = os.path.join(dataset_dir_test, "FAKE")
# Set the paths to the new directories that will contain the selected images
test_dir = "/kaggle/working/test"
real_test_dir = os.path.join(test_dir, "REAL")
fake_test_dir = os.path.join(test_dir, "FAKE")
# Create the new directories if they don't exist
if not os.path.exists(real_test_dir):
os.makedirs(real_test_dir)
if not os.path.exists(fake_test_dir):
os.makedirs(fake_test_dir)
# Set the number of images to select from each folder
num_images = 200
# Randomly select the required number of images from the REAL folder and copy them to the new directory
real_images = os.listdir(real_dir)
selected_real_images = random.sample(real_images, num_images)
for image_name in selected_real_images:
source_path = os.path.join(real_dir, image_name)
dest_path = os.path.join(real_test_dir, image_name)
shutil.copyfile(source_path, dest_path)
# Randomly select the required number of images from the FAKE folder and copy them to the new directory
fake_images = os.listdir(fake_dir)
selected_fake_images = random.sample(fake_images, num_images)
for image_name in selected_fake_images:
source_path = os.path.join(fake_dir, image_name)
dest_path = os.path.join(fake_test_dir, image_name)
shutil.copyfile(source_path, dest_path)
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
import os
import cv2
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import average_precision_score
import matplotlib.pyplot as plt
# Set the paths to the train and test directories
train_dir = "/kaggle/working/train"
test_dir = "/kaggle/working/test"
# Set up the model
base_model = VGG16(weights="imagenet", include_top=False, input_shape=(32, 32, 3))
for layer in base_model.layers:
layer.trainable = False
model = Sequential()
model.add(base_model)
model.add(Flatten())
model.add(Dense(256, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(1, activation="sigmoid"))
batch_size = 32
# Compile the model
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Perform data augmentation
train_datagen = ImageDataGenerator(
rescale=1.0 / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True
)
# Load the training data
train_generator = train_datagen.flow_from_directory(
train_dir, target_size=(32, 32), batch_size=batch_size, class_mode="binary"
)
# Train the model
history = model.fit(
train_generator, steps_per_epoch=train_generator.n // batch_size, epochs=50
)
# Load the test data
test_datagen = ImageDataGenerator(rescale=1.0 / 255)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(32, 32),
batch_size=batch_size,
class_mode="binary",
shuffle=False,
)
# Make predictions on the test data
predictions = model.predict(test_generator)
labels = [0 if pred < 0.5 else 1 for pred in predictions]
# Convert labels to 'FAKE' and 'REAL'
# labels = ['FAKE' if label == 0 else 'REAL' for label in labels]
# Calculate accuracy
accuracy = np.sum(np.array(test_generator.labels) == np.array(labels)) / len(labels)
# Print the accuracy
print("\nAccuracy:", accuracy)
cm = confusion_matrix(test_generator.labels, labels)
print("\nConfusion Matrix:")
print(cm)
# Compute the classification report
class_names = test_generator.class_indices.keys()
classification_rep = classification_report(
test_generator.labels, labels, target_names=class_names
)
print("\nClassification Report:")
print(classification_rep)
# Calculate the average precision (mAP)
mAP = average_precision_score(test_generator.labels, predictions)
print("\nMean Average Precision (mAP):", mAP)
import matplotlib.pyplot as plt
import seaborn as sns
# Confusion matrix
cm = confusion_matrix(test_generator.labels, labels)
plt.figure(figsize=(8, 6))
sns.heatmap(
cm,
annot=True,
cmap="Blues",
fmt="d",
xticklabels=class_names,
yticklabels=class_names,
)
plt.xlabel("Predicted Labels")
plt.ylabel("True Labels")
plt.title("Confusion Matrix")
plt.show()
# Loss plot
plt.figure(figsize=(8, 6))
plt.plot(history.history["loss"], label="Training Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.title("Training Loss")
plt.legend()
plt.show()
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve
# Calculate precision and recall
precision, recall, _ = precision_recall_curve(test_generator.labels, predictions)
# Plot precision-recall curve
plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-Recall Curve")
plt.grid(True)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, auc
# Calculate precision, recall, and thresholds
precision, recall, thresholds = precision_recall_curve(
test_generator.labels, predictions
)
# Calculate F1-score
f1_scores = 2 * (precision * recall) / (precision + recall)
# Calculate area under the curve (AUC)
auc_score = auc(recall, precision)
# Plot the F1 score against the decision threshold
# (precision and recall have one more entry than thresholds, so drop the last F1 value)
plt.plot(thresholds, f1_scores[:-1], label="F1 score (PR AUC = {:.2f})".format(auc_score))
plt.xlabel("Threshold")
plt.ylabel("F1 score")
plt.title("F1 Curve")
plt.legend()
plt.show()
# Confusion matrix
cm = confusion_matrix(test_generator.labels, labels)
cm_percent = cm / cm.sum(axis=1).reshape(-1, 1) * 100
plt.figure(figsize=(8, 6))
sns.heatmap(
cm_percent,
annot=True,
cmap="Blues",
fmt=".1f",
xticklabels=class_names,
yticklabels=class_names,
)
plt.xlabel("Predicted Labels")
plt.ylabel("True Labels")
plt.title("Confusion Matrix")
plt.show()
# Select random samples from the test data
sample_indices = np.random.choice(len(test_generator), size=10, replace=False)
sample_images = []
sample_actual_labels = []
sample_predicted_labels = []
sample_probabilities = []
for i in sample_indices:
    image, actual_labels = test_generator[i]  # i is a batch index
    # predictions/labels are per image, so map the batch index to the first image of that batch
    predicted_label = labels[i * batch_size]
    probability = predictions[i * batch_size][0]
sample_images.append(image[0]) # Access the first image in the batch
sample_actual_labels.append(
actual_labels[0]
) # Access the actual label for the first image
sample_predicted_labels.append(predicted_label)
sample_probabilities.append(probability)
# Calculate the subplot layout based on the number of sample images
num_images = len(sample_images)
num_rows = int(np.ceil(num_images / 2))
num_cols = min(num_images, 2)
# Plot the sample images with labels and probabilities
plt.figure(figsize=(12, 6))
for i in range(len(sample_images)):
plt.subplot(num_rows, num_cols, i + 1)
plt.imshow(sample_images[i])
actual_label = "FAKE" if sample_actual_labels[i] == 0 else "REAL"
predicted_label = "FAKE" if sample_predicted_labels[i] == 0 else "REAL"
plt.title(
f"Actual: {actual_label}, Predicted: {predicted_label}\nProbability: {sample_probabilities[i]:.2f}"
)
plt.axis("off")
plt.tight_layout()
plt.show()
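# An optional step (not in the original notebook): persist the trained model so it can be reloaded
# later without retraining. The output path is an illustrative choice.
model.save("/kaggle/working/vgg16_cifake_classifier.h5")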
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/813/129813621.ipynb
|
cifake-real-and-ai-generated-synthetic-images
|
birdy654
|
[{"Id": 129813621, "ScriptId": 38450781, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11988597, "CreationDate": "05/16/2023 16:40:17", "VersionNumber": 3.0, "Title": "Fake vs. Real Image Classification using VGG16", "EvaluationDate": "05/16/2023", "IsChange": false, "TotalLines": 280.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 280.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186188715, "KernelVersionId": 129813621, "SourceDatasetVersionId": 5256696}]
|
[{"Id": 5256696, "DatasetId": 3041726, "DatasourceVersionId": 5329502, "CreatorUserId": 2039603, "LicenseName": "Other (specified in description)", "CreationDate": "03/28/2023 16:00:29", "VersionNumber": 3.0, "Title": "CIFAKE: Real and AI-Generated Synthetic Images", "Slug": "cifake-real-and-ai-generated-synthetic-images", "Subtitle": "Can Computer Vision detect when images have been generated by AI?", "Description": "# CIFAKE: Real and AI-Generated Synthetic Images\nThe quality of AI-generated images has rapidly increased, leading to concerns of authenticity and trustworthiness.\n\nCIFAKE is a dataset that contains 60,000 synthetically-generated images and 60,000 real images (collected from CIFAR-10). Can computer vision techniques be used to detect when an image is real or has been generated by AI?\n\nFurther information on this dataset can be found here: [Bird, J.J., Lotfi, A. (2023). CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126.](https://arxiv.org/abs/2303.14126)\n\n\n\n## Dataset details\nThe dataset contains two classes - REAL and FAKE. \n\nFor REAL, we collected the images from Krizhevsky & Hinton's [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html)\n\nFor the FAKE images, we generated the equivalent of CIFAR-10 with Stable Diffusion version 1.4\n\nThere are 100,000 images for training (50k per class) and 20,000 for testing (10k per class)\n\n## Papers with Code\nThe dataset and all studies using it are linked using [Papers with Code](https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images)\n[https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images](https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images)\n\n\n## References\nIf you use this dataset, you **must** cite the following sources\n\n[Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdfl)\n\n[Bird, J.J., Lotfi, A. (2023). CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126.](https://arxiv.org/abs/2303.14126)\n\nReal images are from Krizhevsky & Hinton (2009), fake images are from Bird & Lotfi (2023). 
The Bird & Lotfi study is a preprint currently available on [ArXiv](https://arxiv.org/abs/2303.14126) and this description will be updated when the paper is published.\n\n## Notes\n\nThe updates to the dataset on the 28th of March 2023 did not change anything; the file formats \".jpeg\" were renamed \".jpg\" and the root folder was uploaded to meet Kaggle's usability requirements.\n\n## License\nThis dataset is published under the [same MIT license as CIFAR-10](https://github.com/wichtounet/cifar-10/blob/master/LICENSE):\n\n*Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:*\n\n*The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.*\n\n*THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.*", "VersionNotes": "Kaggle compatibility fix (no actual changes)", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3041726, "CreatorUserId": 2039603, "OwnerUserId": 2039603.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5256696.0, "CurrentDatasourceVersionId": 5329502.0, "ForumId": 3081274, "Type": 2, "CreationDate": "03/24/2023 13:22:42", "LastActivityDate": "03/24/2023", "TotalViews": 13728, "TotalDownloads": 1803, "TotalVotes": 46, "TotalKernels": 15}]
|
[{"Id": 2039603, "UserName": "birdy654", "DisplayName": "Jordan J. Bird", "RegisterDate": "07/03/2018", "PerformanceTier": 2}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import os
import random
import shutil
# Set the paths to your dataset folders
dataset_dir = "/kaggle/input/cifake-real-and-ai-generated-synthetic-images/train"
real_dir = os.path.join(dataset_dir, "REAL")
fake_dir = os.path.join(dataset_dir, "FAKE")
# Set the paths to the new directories that will contain the selected images
train_dir = "/kaggle/working/train"
real_train_dir = os.path.join(train_dir, "REAL")
fake_train_dir = os.path.join(train_dir, "FAKE")
# Create the new directories if they don't exist
if not os.path.exists(real_train_dir):
os.makedirs(real_train_dir)
if not os.path.exists(fake_train_dir):
os.makedirs(fake_train_dir)
# Set the number of images to select from each folder
num_images = 2000
# Randomly select the required number of images from the REAL folder and copy them to the new directory
real_images = os.listdir(real_dir)
selected_real_images = random.sample(real_images, num_images)
for image_name in selected_real_images:
source_path = os.path.join(real_dir, image_name)
dest_path = os.path.join(real_train_dir, image_name)
shutil.copyfile(source_path, dest_path)
# Randomly select the required number of images from the FAKE folder and copy them to the new directory
fake_images = os.listdir(fake_dir)
selected_fake_images = random.sample(fake_images, num_images)
for image_name in selected_fake_images:
source_path = os.path.join(fake_dir, image_name)
dest_path = os.path.join(fake_train_dir, image_name)
shutil.copyfile(source_path, dest_path)
# Set the paths to your dataset folders
dataset_dir_test = "/kaggle/input/cifake-real-and-ai-generated-synthetic-images/test"
real_dir = os.path.join(dataset_dir_test, "REAL")
fake_dir = os.path.join(dataset_dir_test, "FAKE")
# Set the paths to the new directories that will contain the selected images
test_dir = "/kaggle/working/test"
real_test_dir = os.path.join(test_dir, "REAL")
fake_test_dir = os.path.join(test_dir, "FAKE")
# Create the new directories if they don't exist
if not os.path.exists(real_test_dir):
os.makedirs(real_test_dir)
if not os.path.exists(fake_test_dir):
os.makedirs(fake_test_dir)
# Set the number of images to select from each folder
num_images = 200
# Randomly select the required number of images from the REAL folder and copy them to the new directory
real_images = os.listdir(real_dir)
selected_real_images = random.sample(real_images, num_images)
for image_name in selected_real_images:
source_path = os.path.join(real_dir, image_name)
dest_path = os.path.join(real_test_dir, image_name)
shutil.copyfile(source_path, dest_path)
# Randomly select the required number of images from the FAKE folder and copy them to the new directory
fake_images = os.listdir(fake_dir)
selected_fake_images = random.sample(fake_images, num_images)
for image_name in selected_fake_images:
source_path = os.path.join(fake_dir, image_name)
dest_path = os.path.join(fake_test_dir, image_name)
shutil.copyfile(source_path, dest_path)
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
import os
import cv2
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import average_precision_score
import matplotlib.pyplot as plt
# Set the paths to the train and test directories
train_dir = "/kaggle/working/train"
test_dir = "/kaggle/working/test"
# Set up the model
base_model = VGG16(weights="imagenet", include_top=False, input_shape=(32, 32, 3))
for layer in base_model.layers:
layer.trainable = False
model = Sequential()
model.add(base_model)
model.add(Flatten())
model.add(Dense(256, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(1, activation="sigmoid"))
batch_size = 32
# Compile the model
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Perform data augmentation
train_datagen = ImageDataGenerator(
rescale=1.0 / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True
)
# Load the training data
train_generator = train_datagen.flow_from_directory(
train_dir, target_size=(32, 32), batch_size=batch_size, class_mode="binary"
)
# Train the model
history = model.fit(
train_generator, steps_per_epoch=train_generator.n // batch_size, epochs=50
)
# Load the test data
test_datagen = ImageDataGenerator(rescale=1.0 / 255)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(32, 32),
batch_size=batch_size,
class_mode="binary",
shuffle=False,
)
# Make predictions on the test data
predictions = model.predict(test_generator)
labels = [0 if pred < 0.5 else 1 for pred in predictions]
# Convert labels to 'FAKE' and 'REAL'
# labels = ['FAKE' if label == 0 else 'REAL' for label in labels]
# Calculate accuracy
accuracy = np.sum(np.array(test_generator.labels) == np.array(labels)) / len(labels)
# Print the accuracy
print("\nAccuracy:", accuracy)
cm = confusion_matrix(test_generator.labels, labels)
print("\nConfusion Matrix:")
print(cm)
# Compute the classification report
class_names = test_generator.class_indices.keys()
classification_rep = classification_report(
test_generator.labels, labels, target_names=class_names
)
print("\nClassification Report:")
print(classification_rep)
# Calculate the average precision (mAP)
mAP = average_precision_score(test_generator.labels, predictions)
print("\nMean Average Precision (mAP):", mAP)
import matplotlib.pyplot as plt
import seaborn as sns
# Confusion matrix
cm = confusion_matrix(test_generator.labels, labels)
plt.figure(figsize=(8, 6))
sns.heatmap(
cm,
annot=True,
cmap="Blues",
fmt="d",
xticklabels=class_names,
yticklabels=class_names,
)
plt.xlabel("Predicted Labels")
plt.ylabel("True Labels")
plt.title("Confusion Matrix")
plt.show()
# Loss plot
plt.figure(figsize=(8, 6))
plt.plot(history.history["loss"], label="Training Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.title("Training Loss")
plt.legend()
plt.show()
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve
# Calculate precision and recall
precision, recall, _ = precision_recall_curve(test_generator.labels, predictions)
# Plot precision-recall curve
plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-Recall Curve")
plt.grid(True)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, auc
# Calculate precision, recall, and thresholds
precision, recall, thresholds = precision_recall_curve(
test_generator.labels, predictions
)
# Calculate F1-score
f1_scores = 2 * (precision * recall) / (precision + recall)
# Calculate area under the curve (AUC)
auc_score = auc(recall, precision)
# Plot the F1 score against the decision threshold
# (precision and recall have one more entry than thresholds, so drop the last F1 value)
plt.plot(thresholds, f1_scores[:-1], label="F1 score (PR AUC = {:.2f})".format(auc_score))
plt.xlabel("Threshold")
plt.ylabel("F1 score")
plt.title("F1 Curve")
plt.legend()
plt.show()
# Confusion matrix
cm = confusion_matrix(test_generator.labels, labels)
cm_percent = cm / cm.sum(axis=1).reshape(-1, 1) * 100
plt.figure(figsize=(8, 6))
sns.heatmap(
cm_percent,
annot=True,
cmap="Blues",
fmt=".1f",
xticklabels=class_names,
yticklabels=class_names,
)
plt.xlabel("Predicted Labels")
plt.ylabel("True Labels")
plt.title("Confusion Matrix")
plt.show()
# Select random samples from the test data
sample_indices = np.random.choice(len(test_generator), size=10, replace=False)
sample_images = []
sample_actual_labels = []
sample_predicted_labels = []
sample_probabilities = []
for i in sample_indices:
    image, actual_labels = test_generator[i]  # i is a batch index
    # predictions/labels are per image, so map the batch index to the first image of that batch
    predicted_label = labels[i * batch_size]
    probability = predictions[i * batch_size][0]
sample_images.append(image[0]) # Access the first image in the batch
sample_actual_labels.append(
actual_labels[0]
) # Access the actual label for the first image
sample_predicted_labels.append(predicted_label)
sample_probabilities.append(probability)
# Calculate the subplot layout based on the number of sample images
num_images = len(sample_images)
num_rows = int(np.ceil(num_images / 2))
num_cols = min(num_images, 2)
# Plot the sample images with labels and probabilities
plt.figure(figsize=(12, 6))
for i in range(len(sample_images)):
plt.subplot(num_rows, num_cols, i + 1)
plt.imshow(sample_images[i])
actual_label = "FAKE" if sample_actual_labels[i] == 0 else "REAL"
predicted_label = "FAKE" if sample_predicted_labels[i] == 0 else "REAL"
plt.title(
f"Actual: {actual_label}, Predicted: {predicted_label}\nProbability: {sample_probabilities[i]:.2f}"
)
plt.axis("off")
plt.tight_layout()
plt.show()
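# A minimal inference sketch (not part of the original notebook): classify a single image file with
# the trained model, preprocessing it the same way as the generators (resize to 32x32, rescale by 1/255).
# The example path below is a placeholder, not a file guaranteed to exist.
from tensorflow.keras.preprocessing import image as keras_image


def predict_single_image(img_path, model, threshold=0.5):
    img = keras_image.load_img(img_path, target_size=(32, 32))
    arr = keras_image.img_to_array(img) / 255.0
    arr = np.expand_dims(arr, axis=0)  # add a batch dimension
    prob = float(model.predict(arr)[0][0])
    # flow_from_directory assigns class indices alphabetically: FAKE -> 0, REAL -> 1
    return ("REAL" if prob >= threshold else "FAKE"), prob


# Example usage (placeholder path):
# label, prob = predict_single_image("/kaggle/working/test/REAL/0001.jpg", model)
# print(label, prob)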
| false | 0 | 2,906 | 0 | 3,949 | 2,906 |
||
129602055
|
<jupyter_start><jupyter_text>Air Passenger Data for Time Series Analysis
### Context
This data is used for making ARIMA model forecasting.
### Content
This contains the increasing rate of passenger
Kaggle dataset identifier: air-passenger-data-for-time-series-analysis
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Load Dataset
df = pd.read_csv(
"/kaggle/input/air-passenger-data-for-time-series-analysis/AirPassengers.csv"
)
df
# # Exploratory Data Analysis (EDA)
# ## View Dataset Description
df.info()
# ## Data Visualization
import matplotlib.pyplot as plt
import datetime
plt.figure(figsize=(12, 6))
plt.plot(df["Month"], df["#Passengers"])
plt.xlabel("Time")
# plt.xticks(rotation=45)
plt.ylabel("Num of Passengers")
plt.title("US Airline Num of Passengers Trend 1949 - 1960")
plt.show()
# There is a positive trend with some repetitive pattern
# # Time Series Decomposition
from statsmodels.tsa.seasonal import seasonal_decompose
from dateutil.parser import parse
# ## Additive Decomposition
additive_dec = seasonal_decompose(df["#Passengers"], model="additive", period=30)
plt.figure(figsize=(12, 8))
additive_dec.plot()
plt.suptitle("Additive Decomposition", fontsize=12)
plt.tight_layout()
plt.show()
multiplicative_dec = seasonal_decompose(
df["#Passengers"], model="multiplicative", period=30
)
plt.figure(figsize=(12, 8))
multiplicative_dec.plot()
plt.suptitle("Multiplicative Decomposition", fontsize=12)
plt.tight_layout()
plt.show()
# The residuals from the additive decomposition still show a pattern, while the multiplicative residuals look fairly random, so we prefer the multiplicative decomposition.
# # Stationary Test for Time Series
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.graphics.tsaplots import plot_acf
# ## Augmented Dickey Fuller Test (ADF Test)
# H0: time series data is non-stationary
# H1: time series data is stationary
# a p-value below 0.05 rejects the null hypothesis (H0)
result = adfuller(df["#Passengers"].values, autolag="AIC")
print(f"ADF Statistic: {result[0]}")
print(f"p-value: {result[1]}")
# ## KPSS Test
# H0: time series data is stationary
# H1: time series data is non-stationary
# a p-value below 0.05 rejects the null hypothesis (H0)
result = kpss(df["#Passengers"])
print("KPSS Statistic:", result[0])
print("p-value:", result[1])
# From the two test results above, we can see that the current data is non-stationary
# # Autocorrelation
# Measure how correlated time series data is at a given point in time with past values.
autocorr_lag1 = df["#Passengers"].autocorr(lag=1)
print("One Month Lag: ", autocorr_lag1)
autocorr_lag3 = df["#Passengers"].autocorr(lag=3)
print("Three Month Lag: ", autocorr_lag3)
autocorr_lag6 = df["#Passengers"].autocorr(lag=6)
print("Six Month Lag: ", autocorr_lag6)
autocorr_lag9 = df["#Passengers"].autocorr(lag=9)
print("Nine Month Lag: ", autocorr_lag9)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/602/129602055.ipynb
|
air-passenger-data-for-time-series-analysis
|
ashfakyeafi
|
[{"Id": 129602055, "ScriptId": 38534427, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6654637, "CreationDate": "05/15/2023 07:00:58", "VersionNumber": 1.0, "Title": "Airline Passenger Forecasting using ARIMA", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 113.0, "LinesInsertedFromPrevious": 113.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185836756, "KernelVersionId": 129602055, "SourceDatasetVersionId": 2504188}]
|
[{"Id": 2504188, "DatasetId": 1516462, "DatasourceVersionId": 2546888, "CreatorUserId": 5154008, "LicenseName": "CC0: Public Domain", "CreationDate": "08/06/2021 14:46:29", "VersionNumber": 1.0, "Title": "Air Passenger Data for Time Series Analysis", "Slug": "air-passenger-data-for-time-series-analysis", "Subtitle": "There is a list of passenger data from year 1949 to 1960", "Description": "### Context\n\nThis data is used for making ARIMA model forecasting.\n\n\n### Content\n\nThis contains the increasing rate of passenger\n\n\n### Acknowledgements\n\nWe wouldn't be here without the help of others. If you owe any attributions or thanks, include them here along with any citations of past research.\n\n\n### Inspiration\n\nYour data will be in front of the world's largest data science community. What questions do you want to see answered?", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1516462, "CreatorUserId": 5154008, "OwnerUserId": 5154008.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2504188.0, "CurrentDatasourceVersionId": 2546888.0, "ForumId": 1536251, "Type": 2, "CreationDate": "08/06/2021 14:46:29", "LastActivityDate": "08/06/2021", "TotalViews": 11264, "TotalDownloads": 1480, "TotalVotes": 43, "TotalKernels": 9}]
|
[{"Id": 5154008, "UserName": "ashfakyeafi", "DisplayName": "Ashfak Yeafi", "RegisterDate": "05/24/2020", "PerformanceTier": 3}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Load Dataset
df = pd.read_csv(
"/kaggle/input/air-passenger-data-for-time-series-analysis/AirPassengers.csv"
)
df
# # Exploratory Data Analysis (EDA)
# ## View Dataset Description
df.info()
# ## Data Visualization
import matplotlib.pyplot as plt
import datetime
plt.figure(figsize=(12, 6))
plt.plot(df["Month"], df["#Passengers"])
plt.xlabel("Time")
# plt.xticks(rotation=45)
plt.ylabel("Num of Passengers")
plt.title("US Airline Num of Passengers Trend 1949 - 1960")
plt.show()
# There is a positive trend with some repetitive pattern
# # Time Series Decomposition
from statsmodels.tsa.seasonal import seasonal_decompose
from dateutil.parser import parse
# ## Additive Decomposition
additive_dec = seasonal_decompose(df["#Passengers"], model="additive", period=30)
plt.figure(figsize=(12, 8))
additive_dec.plot()
plt.suptitle("Additive Decomposition", fontsize=12)
plt.tight_layout()
plt.show()
multiplicative_dec = seasonal_decompose(
df["#Passengers"], model="multiplicative", period=30
)
plt.figure(figsize=(12, 8))
multiplicative_dec.plot()
plt.suptitle("Multiplicative Decomposition", fontsize=12)
plt.tight_layout()
plt.show()
# The residuals from the additive decomposition still show a pattern, while the multiplicative residuals look fairly random, so we prefer the multiplicative decomposition.
# # Stationary Test for Time Series
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.graphics.tsaplots import plot_acf
# ## Augmented Dickey Fuller Test (ADF Test)
# H0: time series data is non-stationary
# H1: time series data is stationary
# a p-value below 0.05 rejects the null hypothesis (H0)
result = adfuller(df["#Passengers"].values, autolag="AIC")
print(f"ADF Statistic: {result[0]}")
print(f"p-value: {result[1]}")
# ## KPSS Test
# H0: time series data is stationary
# H1: time series data is non-stationary
# a p-value below 0.05 rejects the null hypothesis (H0)
result = kpss(df["#Passengers"])
print("KPSS Statistic:", result[0])
print("p-value:", result[1])
# From the two test results above, we can see that the current data is non-stationary
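# A possible next step (a sketch, not in the original notebook): first-order differencing is a common
# way to make such a series stationary; re-running the ADF test checks whether it helped.
diff_passengers = df["#Passengers"].diff().dropna()
result_diff = adfuller(diff_passengers.values, autolag="AIC")
print(f"ADF Statistic (1st difference): {result_diff[0]}")
print(f"p-value (1st difference): {result_diff[1]}")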
# # Autocorrelation
# Measure how correlated time series data is at a given point in time with past values.
autocorr_lag1 = df["#Passengers"].autocorr(lag=1)
print("One Month Lag: ", autocorr_lag1)
autocorr_lag3 = df["#Passengers"].autocorr(lag=3)
print("Three Month Lag: ", autocorr_lag3)
autocorr_lag6 = df["#Passengers"].autocorr(lag=6)
print("Six Month Lag: ", autocorr_lag6)
autocorr_lag9 = df["#Passengers"].autocorr(lag=9)
print("Nine Month Lag: ", autocorr_lag9)
| false | 1 | 1,034 | 0 | 1,102 | 1,034 |
||
129602912
|
# **addition
# subtraction
# multiplication
# division**
sales_A = 100
sales_B = 200
total_sales = sales_A + sales_B
diff_sales = sales_A - sales_B
print(total_sales)
print(diff_sales)
sales_per_unit = 40
no_of_units = 45
total = sales_per_unit * no_of_units
print(total)
yearly_sales = 28000
average_sale_per_month = yearly_sales / 12
print(average_sale_per_month)
# **modulus** **division** for finding remainder
a = 21
remainder = a % 2
print(remainder)
if remainder == 1:
print("odd")
# **exponentiation**
x = 2
power = 2
answer = x**power
print(answer)
# **floor division**
a = 50
b = 12
c = a // b
print(c)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/602/129602912.ipynb
| null | null |
[{"Id": 129602912, "ScriptId": 38503985, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12850343, "CreationDate": "05/15/2023 07:08:16", "VersionNumber": 1.0, "Title": "Arithmetic operations", "EvaluationDate": NaN, "IsChange": true, "TotalLines": 45.0, "LinesInsertedFromPrevious": 45.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# **addition
# subtraction
# multiplication
# division**
sales_A = 100
sales_B = 200
total_sales = sales_A + sales_B
diff_sales = sales_A - sales_B
print(total_sales)
print(diff_sales)
sales_per_unit = 40
no_of_units = 45
total = sales_per_unit * no_of_units
print(total)
yearly_sales = 28000
average_sale_per_month = yearly_sales / 12
print(average_sale_per_month)
# **modulus** **division** for finding remainder
a = 21
remainder = a % 2
print(remainder)
if remainder == 1:
print("odd")
# **exponentiation**
x = 2
power = 2
answer = x**power
print(answer)
# **floor division**
a = 50
b = 12
c = a // b
print(c)
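# A small combined example (added for illustration, not in the original): floor division and modulus
# together split a quantity into whole units plus a remainder.
total_minutes = 135
hours = total_minutes // 60  # whole hours
minutes = total_minutes % 60  # leftover minutes
print(hours, "hours and", minutes, "minutes")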
| false | 0 | 256 | 0 | 256 | 256 |
||
129602169
|
<jupyter_start><jupyter_text>Real estate prices in Tashkent, Uzbekistan
### Context
The dataset containt the prices for real estate in Tashkent, Uzbekistan. Data was scraped from uybor.uz, real-estate advertisement website. Data was scraped back in 2019.
### Content
The dataset contains following columns:
address - approximate address of the real-estate
district - the district the real-estate located in
rooms - number of rooms
size - total size of the unit in **square meters**
level - which level the unit located at
max_levels - maximum levels of the building
price - price in **USD**
lat - latitude
lng - longitude
Kaggle dataset identifier: tashkent-real-estate-2019
<jupyter_script>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style
from matplotlib.gridspec import GridSpec
import seaborn as sns
sns.set()
from scipy import stats
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_selector, make_column_transformer
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, train_test_split
import sklearn.metrics as metrics
style.use("fivethirtyeight")
pd.options.mode.chained_assignment = None # default='warn'
# the dataset is an Excel file, so pandas reads it via the openpyxl engine
# # Loading Dataset
data = pd.read_excel("../input/tashkent-real-estate-2019/uybor.xlsx")
# # EDA + FE: Exploratory Data Analysis and Feature Engineering
data.head()
data.shape
data.nunique()
# ### We can see that this dataset doesn't have NaN values
data.info()
data.describe()
data.drop("address", axis=1, inplace=True)  # drop the address column in place (with inplace=True, drop returns None)
# ### Price
# #### The price column has outliers
sns.histplot(data.price)
plt.show()
# #### For a good result, I am looking for a roughly normal distribution, so I will try to remove the outliers
def remove_outliers(data, x):
std_dev = np.std(data[x])
mean = np.mean(data[x])
cut_off = std_dev * 1.5
lower, upper = mean - cut_off, mean + cut_off
data = data[(data[x] < upper) & (data[x] > lower)]
print(f"Outliers of {x} are removed\n")
return data
data = remove_outliers(data, "price")
sns.histplot(data.price)
plt.show()
q = data["price"].quantile(0.99)
df = data[data["price"] < q]
df.describe(include="all")
df = data.copy()
sns.histplot(df["price"])
plt.show()
fig = plt.figure(figsize=(16, 12))
grid = GridSpec(ncols=1, nrows=2, figure=fig)
# Histogram
ax1 = fig.add_subplot(grid[0, :])
sns.histplot(df["price"], ax=ax1, kde=True)
# QQ plot
ax2 = fig.add_subplot(grid[1, :])
stats.probplot(df["price"], plot=ax2)
df.shape
sns.distplot(df["size"])
q = df["size"].quantile(0.99)
df2 = df[df["size"] < q]
sns.distplot(df2["size"])
sns.displot(df["rooms"])
df3 = df[df.rooms < 7]
sns.displot(df3.rooms)
sns.displot(df["level"])
q = df3["level"].quantile(0.99)
df4 = df3[df3["level"] < q]
sns.displot(df4["level"])
sns.displot(df4["max_levels"])
q = df4["level"].quantile(0.99)
df5 = df4[df4["level"] < q]
sns.displot(df5["max_levels"])
data_cleaned = df5.reset_index(drop=True)
data_cleaned.head()
distdf = df.groupby("district").mean()
# Draw the plots
fig, ax = plt.subplots(2, 2, figsize=(15, 10))
# Plot each chart on the grid:
sns.histplot(ax=ax[0, 0], data=data_cleaned, x="price")
sns.histplot(ax=ax[0, 1], data=data_cleaned, x="size")
sns.scatterplot(
    ax=ax[1, 0], data=data_cleaned, x=data_cleaned["size"], y=data_cleaned["price"]
)
sns.barplot(ax=ax[1, 1], x=distdf.index, y=distdf["price"])
# A title for each chart:
ax[0, 0].set_title("Distribution of house prices")
ax[0, 1].set_title("Distribution of house sizes")
ax[1, 0].set_title("Relationship between house price and size")
ax[1, 1].set_title("Average prices by district")
plt.xticks(rotation=90)
plt.show()
# **I will create interaction terms between different features to capture their combined effect on the target variable. For example, multiply size and level features to create an interaction term.**
data_cleaned["size_level"] = data_cleaned["level"] * data_cleaned["size"]
data_cleaned.describe(include="all")
print("Below the most important features relative to Price-target")
corr = data_cleaned.corr()
corr.sort_values(["price"], ascending=False, inplace=True)
print(corr.price)
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant
variables = data_cleaned[["size", "level", "max_levels"]]
vif = pd.DataFrame()
vif["VIF"] = [
variance_inflation_factor(variables.values, i) for i in range(variables.shape[1])
]
vif["features"] = variables.columns
vif
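# **Note:** the variance inflation factor reported above for feature $i$ is $VIF_i = 1 / (1 - R_i^2)$, where $R_i^2$ comes from regressing feature $i$ on the remaining features; values far above 1 indicate multicollinearity.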
# **Earlier, with the rooms column included, the VIF check showed a high correlation between size and rooms, so I am going to drop the rooms column**
data_no_multicollinearity = data_cleaned.drop(["rooms"], axis=1)
data_with_dummies = pd.get_dummies(data_no_multicollinearity, drop_first=True)
data_with_dummies.head()
data_with_dummies.shape
# import necessary libraries
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_squared_error, mean_absolute_error
# define features and target
X = data_with_dummies.drop(["price"], axis=1)
y = data_with_dummies["price"]
# split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# define preprocessing pipeline
preprocessor = make_pipeline(StandardScaler())
# define Lasso regression model with cross-validation
model = LassoCV(cv=5)
# define pipeline with preprocessor and Lasso regression
pipe = make_pipeline(preprocessor, model)
# define parameter grid for hyperparameter tuning
param_grid = {
"lassocv__alphas": [[0.001, 0.01, 0.1, 1, 10]],
"lassocv__max_iter": [10000],
"lassocv__tol": [1e-4],
}
# perform grid search with cross-validation
grid_search = GridSearchCV(pipe, param_grid, cv=5)
grid_search.fit(X_train, y_train)
# evaluate model with best hyperparameters
model = grid_search.best_estimator_
train_score = model.score(X_train, y_train)
test_score = model.score(X_test, y_test)
y_pred = model.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, y_pred)
# print evaluation metrics and example predictions
print("MSE:", mse)
print("RMSE:", rmse)
print("MAE:", mae)
print("Score (train):", train_score)
print("Score (test):", test_score)
for i in range(5):
print(
"Real Value: ${}, Predicted Value: ${}".format(
y_test.values[i], round(y_pred[i])
)
)
def visualize_model_results(data, model):
fig = plt.figure(figsize=(17, 10))
data = data.sort_values(by=["price"])
X = data.drop("price", axis=1)
y = data.price.astype(int)
plt.scatter(range(X.shape[0]), y, color="red", label="Real")
plt.scatter(range(X.shape[0]), model.predict(X), marker=".", label="Predicted")
plt.legend(loc=2, prop={"size": 25})
plt.show()
visualize_model_results(data_with_dummies, model)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/602/129602169.ipynb
|
tashkent-real-estate-2019
|
anvarnarz
|
[{"Id": 129602169, "ScriptId": 38539008, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4558105, "CreationDate": "05/15/2023 07:01:50", "VersionNumber": 1.0, "Title": "Tashkent house price prediction", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 237.0, "LinesInsertedFromPrevious": 237.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
|
[{"Id": 185836896, "KernelVersionId": 129602169, "SourceDatasetVersionId": 3005925}]
|
[{"Id": 3005925, "DatasetId": 1841368, "DatasourceVersionId": 3053775, "CreatorUserId": 2223916, "LicenseName": "CC0: Public Domain", "CreationDate": "01/05/2022 03:04:59", "VersionNumber": 1.0, "Title": "Real estate prices in Tashkent, Uzbekistan", "Slug": "tashkent-real-estate-2019", "Subtitle": "Data scraped from uybor.uz", "Description": "### Context\n\nThe dataset containt the prices for real estate in Tashkent, Uzbekistan. Data was scraped from uybor.uz, real-estate advertisement website. Data was scraped back in 2019.\n\n\n### Content\n\nThe dataset contains following columns:\naddress - approximate address of the real-estate\ndistrict - the district the real-estate located in\nrooms - number of rooms\nsize - total size of the unit in **square meters**\nlevel - which level the unit located at\nmax_levels - maximum levels of the building\nprice - price in **USD**\nlat - latitude\nlng - longitude", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1841368, "CreatorUserId": 2223916, "OwnerUserId": 2223916.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3005925.0, "CurrentDatasourceVersionId": 3053775.0, "ForumId": 1864221, "Type": 2, "CreationDate": "01/05/2022 03:04:59", "LastActivityDate": "01/05/2022", "TotalViews": 3662, "TotalDownloads": 394, "TotalVotes": 33, "TotalKernels": 27}]
|
[{"Id": 2223916, "UserName": "anvarnarz", "DisplayName": "Anvar", "RegisterDate": "09/08/2018", "PerformanceTier": 0}]
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style
from matplotlib.gridspec import GridSpec
import seaborn as sns
sns.set()
from scipy import stats
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_selector, make_column_transformer
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, train_test_split
import sklearn.metrics as metrics
style.use("fivethirtyeight")
pd.options.mode.chained_assignment = None # default='warn'
# the dataset is an Excel file, so pandas needs the openpyxl engine to read it
# # Loading Dataset
data = pd.read_excel("../input/tashkent-real-estate-2019/uybor.xlsx")
# # EDA + FE: Exploratory Data Analysis and Feature Engineering
data.head()
data.shape
data.nunique()
# ### We can see that this dataset doesn't have NaN values
data.info()
data.describe()
df = data.drop("address", axis=1, inplace=True)
# ### Price
# #### The price column has outliers
sns.histplot(data.price)
plt.show()
# #### For a good result, I am looking for a roughly normal distribution, so I will try to remove the outliers
def remove_outliers(data, x):
std_dev = np.std(data[x])
mean = np.mean(data[x])
cut_off = std_dev * 1.5
lower, upper = mean - cut_off, mean + cut_off
data = data[(data[x] < upper) & (data[x] > lower)]
print(f"Outliers of {x} are removed\n")
return data
data = remove_outliers(data, "price")
sns.histplot(data.price)
plt.show()
q = data["price"].quantile(0.99)
df = data[data["price"] < q]
df.describe(include="all")
df = data.copy()
sns.histplot(df["price"])
plt.show()
fig = plt.figure(figsize=(16, 12))
grid = GridSpec(ncols=1, nrows=2, figure=fig)
# Histogram
ax1 = fig.add_subplot(grid[0, :])
sns.histplot(df["price"], ax=ax1, kde=True)
# QQ plot
ax2 = fig.add_subplot(grid[1, :])
stats.probplot(df["price"], plot=ax2)
df.shape
sns.distplot(df["size"])
q = df["size"].quantile(0.99)
df2 = df[df["size"] < q]
sns.distplot(df2["size"])
sns.displot(df["rooms"])
df3 = df[df.rooms < 7]
sns.displot(df3.rooms)
sns.displot(df["level"])
q = df3["level"].quantile(0.99)
df4 = df3[df3["level"] < q]
sns.displot(df4["level"])
sns.displot(df4["max_levels"])
q = df4["level"].quantile(0.99)
df5 = df4[df4["level"] < q]
sns.displot(df5["max_levels"])
data_cleaned = df5.reset_index(drop=True)
data_cleaned.head()
distdf = df.groupby("district").mean()
# Draw the plots
fig, ax = plt.subplots(2, 2, figsize=(15, 10))
# Plot each chart on the grid:
sns.histplot(ax=ax[0, 0], data=data_cleaned, x="price")
sns.histplot(ax=ax[0, 1], data=data_cleaned, x="size")
sns.scatterplot(
    ax=ax[1, 0], data=data_cleaned, x=data_cleaned["size"], y=data_cleaned["price"]
)
sns.barplot(ax=ax[1, 1], x=distdf.index, y=distdf["price"])
# A title for each chart:
ax[0, 0].set_title("Distribution of house prices")
ax[0, 1].set_title("Distribution of house sizes")
ax[1, 0].set_title("Relationship between house price and size")
ax[1, 1].set_title("Average prices by district")
plt.xticks(rotation=90)
plt.show()
# **I will create interaction terms between different features to capture their combined effect on the target variable. For example, multiply size and level features to create an interaction term.**
data_cleaned["size_level"] = data_cleaned["level"] * data_cleaned["size"]
data_cleaned.describe(include="all")
print("Below the most important features relative to Price-target")
corr = data_cleaned.corr()
corr.sort_values(["price"], ascending=False, inplace=True)
print(corr.price)
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant
variables = data_cleaned[["size", "level", "max_levels"]]
vif = pd.DataFrame()
vif["VIF"] = [
variance_inflation_factor(variables.values, i) for i in range(variables.shape[1])
]
vif["features"] = variables.columns
vif
# **Earlier, with the rooms column included, the VIF check showed a high correlation between size and rooms, so I am going to drop the rooms column**
data_no_multicollinearity = data_cleaned.drop(["rooms"], axis=1)
data_with_dummies = pd.get_dummies(data_no_multicollinearity, drop_first=True)
data_with_dummies.head()
data_with_dummies.shape
# import necessary libraries
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_squared_error, mean_absolute_error
# define features and target
X = data_with_dummies.drop(["price"], axis=1)
y = data_with_dummies["price"]
# split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# define preprocessing pipeline
preprocessor = make_pipeline(StandardScaler())
# define Lasso regression model with cross-validation
model = LassoCV(cv=5)
# define pipeline with preprocessor and Lasso regression
pipe = make_pipeline(preprocessor, model)
# define parameter grid for hyperparameter tuning
param_grid = {
"lassocv__alphas": [[0.001, 0.01, 0.1, 1, 10]],
"lassocv__max_iter": [10000],
"lassocv__tol": [1e-4],
}
# perform grid search with cross-validation
grid_search = GridSearchCV(pipe, param_grid, cv=5)
grid_search.fit(X_train, y_train)
# evaluate model with best hyperparameters
model = grid_search.best_estimator_
train_score = model.score(X_train, y_train)
test_score = model.score(X_test, y_test)
y_pred = model.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, y_pred)
# print evaluation metrics and example predictions
print("MSE:", mse)
print("RMSE:", rmse)
print("MAE:", mae)
print("Score (train):", train_score)
print("Score (test):", test_score)
for i in range(5):
print(
"Real Value: ${}, Predicted Value: ${}".format(
y_test.values[i], round(y_pred[i])
)
)
def visualize_model_results(data, model):
fig = plt.figure(figsize=(17, 10))
data = data.sort_values(by=["price"])
X = data.drop("price", axis=1)
y = data.price.astype(int)
plt.scatter(range(X.shape[0]), y, color="red", label="Real")
plt.scatter(range(X.shape[0]), model.predict(X), marker=".", label="Predicted")
plt.legend(loc=2, prop={"size": 25})
plt.show()
visualize_model_results(data_with_dummies, model)
| false | 0 | 2,217 | 3 | 2,407 | 2,217 |
||
129602235
|
<jupyter_start><jupyter_text>Unemployment in India
### Context
The story behind this datasets is how lock-down affects employment opportunities and how the unemployment rate increases during the Covid-19.
### Content
This dataset contains the unemployment rate of all the states in India
Region = states in India
Date = date which the unemployment rate observed
Frequency = measuring frequency (Monthly)
Estimated Unemployment Rate (%) = percentage of people unemployed in each state of India
Estimated Employed = percentage of people employed
Estimated Labour Participation Rate (%) = labour force participation rate, obtained by dividing the number of people actively participating in the labour force by the total number of people eligible to participate in the labour force (see the worked example below)
Kaggle dataset identifier: unemployment-in-india
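For example (hypothetical numbers, purely for illustration): if 400 out of 1,000 people eligible to participate in the labour force are actively participating, the Estimated Labour Participation Rate is 400 / 1000 = 40%.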
<jupyter_script>import pandas as pd
import numpy as np
import seaborn as sns
import pandas as pd
# Dataset from - https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection
df = pd.read_csv(
"/kaggle/input/sms-spam-collection-dataset/spam.csv", encoding="ISO-8859-1"
)
df.head()
df["v2"].value_counts()
df["v1"].value_counts()
# separate x and y
x = df.v2.values
y = df.v1.values
# split train and test
from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.25)
# Data preprocessing
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
X_train = cv.fit_transform(xtrain)
X_train.toarray()
# ML algorithm
from sklearn.naive_bayes import MultinomialNB
model = MultinomialNB()
model.fit(X_train, ytrain)
x_test = cv.transform(xtest)
x_test.toarray()
model.score(x_test, ytest)
email = [
"get an iphone 14 for free",
"use this product to be fair within 7 days, otherwise money return",
"give your account number of bank ,to get 1000000 dollar free",
"i am looking for english language tutorials",
]
cv_email = cv.transform(email)
model.predict(cv_email)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/602/129602235.ipynb
|
unemployment-in-india
|
gokulrajkmv
|
[{"Id": 129602235, "ScriptId": 38501424, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12132064, "CreationDate": "05/15/2023 07:02:24", "VersionNumber": 1.0, "Title": "Spam Email", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 69.0, "LinesInsertedFromPrevious": 69.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
|
[{"Id": 185836979, "KernelVersionId": 129602235, "SourceDatasetVersionId": 1621146}, {"Id": 185836978, "KernelVersionId": 129602235, "SourceDatasetVersionId": 982}]
|
[{"Id": 1621146, "DatasetId": 752131, "DatasourceVersionId": 1656837, "CreatorUserId": 5012903, "LicenseName": "Other (specified in description)", "CreationDate": "11/05/2020 12:41:41", "VersionNumber": 5.0, "Title": "Unemployment in India", "Slug": "unemployment-in-india", "Subtitle": "during this darker times, we need to understand unemployment rate", "Description": "### Context\n\nThe story behind this datasets is how lock-down affects employment opportunities and how the unemployment rate increases during the Covid-19.\n\n### Content\n\nThis dataset contains the unemployment rate of all the states in India\n\nRegion = states in India\nDate = \t date which the unemployment rate observed \nFrequency = measuring frequency (Monthly)\t \nEstimated Unemployment Rate (%) = percentage of people unemployed in each States of India\nEstimated Employed\t = percentage of people employed\nEstimated Labour Participation Rate (%) =\t labour force participation rate by dividing the number of people actively participating in the labour force by the \n total number of people eligible to participate in the labor force\n force\n\n\n### Acknowledgements\n\nI wouldn't be here without the help of my friends. I owe you thanks !!\n\n\n### Inspiration\n\nquestions?\n1. How Covid-19 affects the employment\n2. how far the unemployment rate will go\n\nsource of datasets\nhttps://unemploymentinindia.cmie.com/", "VersionNotes": "location update", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 752131, "CreatorUserId": 5012903, "OwnerUserId": 5012903.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1621146.0, "CurrentDatasourceVersionId": 1656837.0, "ForumId": 767051, "Type": 2, "CreationDate": "07/02/2020 10:58:52", "LastActivityDate": "07/02/2020", "TotalViews": 40084, "TotalDownloads": 8967, "TotalVotes": 52, "TotalKernels": 27}]
|
[{"Id": 5012903, "UserName": "gokulrajkmv", "DisplayName": "Gokul raj K.", "RegisterDate": "05/03/2020", "PerformanceTier": 2}]
|
import pandas as pd
import numpy as np
import seaborn as sns
import pandas as pd
# Dataset from - https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection
df = pd.read_csv(
"/kaggle/input/sms-spam-collection-dataset/spam.csv", encoding="ISO-8859-1"
)
df.head()
df["v2"].value_counts()
df["v1"].value_counts()
# separate x and y
x = df.v2.values
y = df.v1.values
# split train and test
from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.25)
# Data preprocessing
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
X_train = cv.fit_transform(xtrain)
X_train.toarray()
# ML algorithm
from sklearn.naive_bayes import MultinomialNB
model = MultinomialNB()
model.fit(X_train, ytrain)
x_test = cv.transform(xtest)
x_test.toarray()
model.score(x_test, ytest)
email = [
"get an iphone 14 for free",
"use this product to be fair within 7 days, otherwise money return",
"give your account number of bank ,to get 1000000 dollar free",
"i am looking for english language tutorials",
]
cv_email = cv.transform(email)
model.predict(cv_email)
| false | 1 | 394 | 3 | 607 | 394 |
||
129675358
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
def dot_product(a, b):
    # matrix product is defined only when a.shape[1] == b.shape[0]
    if a.shape[1] == b.shape[0]:
        # the result has shape (a.shape[0], b.shape[1])
        c = [[0] * b.shape[1] for _ in range(a.shape[0])]
        for i in range(a.shape[0]):
            for j in range(b.shape[1]):
                c[i][j] = np.sum(a[i, :] * b[:, j])
        print("Dot product of a and b is:\n", c)
    else:
        print(
            "No. of columns of the first matrix should match "
            "the no. of rows of the second matrix"
        )
sample1 = [1, 2, 3, 4, 5]
sample2 = [2, 1, 1, 1, 1]
# dot_product(sample1, sample2)
a = np.array([[1, 2, 3], [5, 6, 4]])
b = np.array([[3, 4], [5, 7], [4, 8]])
c = np.array([[1, 2], [5, 6]])
b.shape
dot_product(b, a)
b.dot(a)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/675/129675358.ipynb
| null | null |
[{"Id": 129675358, "ScriptId": 38555066, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14335370, "CreationDate": "05/15/2023 16:48:05", "VersionNumber": 1.0, "Title": "dotProduct", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 52.0, "LinesInsertedFromPrevious": 52.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
def dot_product(a, b):
    # matrix product is defined only when a.shape[1] == b.shape[0]
    if a.shape[1] == b.shape[0]:
        # the result has shape (a.shape[0], b.shape[1])
        c = [[0] * b.shape[1] for _ in range(a.shape[0])]
        for i in range(a.shape[0]):
            for j in range(b.shape[1]):
                c[i][j] = np.sum(a[i, :] * b[:, j])
        print("Dot product of a and b is:\n", c)
    else:
        print(
            "No. of columns of the first matrix should match "
            "the no. of rows of the second matrix"
        )
sample1 = [1, 2, 3, 4, 5]
sample2 = [2, 1, 1, 1, 1]
# dot_product(sample1, sample2)
a = np.array([[1, 2, 3], [5, 6, 4]])
b = np.array([[3, 4], [5, 7], [4, 8]])
c = np.array([[1, 2], [5, 6]])
b.shape
dot_product(b, a)
b.dot(a)
| false | 0 | 459 | 0 | 459 | 459 |
||
129232572
|
# 
from tensorflow.keras.layers import LayerNormalization, Layer, Dense, ReLU, Dropout
# **Import For Multi-head Attention Layer**
from tensorflow import matmul, math, reshape, shape, transpose, cast, float32
from tensorflow.keras.layers import Dense, Layer
from keras.backend import softmax
from numpy import random
# **Implement the scaled dot-product attention**
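# The layer below computes $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\big(QK^T / \sqrt{d_k}\big)\, V$; when a mask is supplied, masked positions are pushed towards $-10^9$ before the softmax so that their attention weights become effectively zero.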
class DotProductAttention(Layer):
def __init__(self, **kwargs):
super(DotProductAttention, self).__init__(**kwargs)
def call(self, queries, keys, values, d_k, mask=None):
scores = matmul(queries, keys, transpose_b=True) / math.sqrt(cast(d_k, float32))
if mask is not None:
scores += -1e9 * mask
weights = softmax(scores)
return matmul(weights, values)
# **Implementing Multi-head attention**
class MultiHeadAttention(Layer):
def __init__(self, h, d_k, d_v, d_model, **kwargs):
super(MultiHeadAttention, self).__init__(**kwargs)
self.attention = DotProductAttention()
self.heads = h
self.d_k = d_k
self.d_v = d_v
self.d_model = d_model
self.W_q = Dense(d_k)
self.W_k = Dense(d_k)
self.W_v = Dense(d_v)
self.W_o = Dense(d_model)
def reshape_tensor(self, x, heads, flag):
if flag:
# (batch_size, heads, seq_length, -1)
x = reshape(x, shape=(shape(x)[0], shape(x)[1], heads, -1))
x = transpose(x, perm=(0, 2, 1, 3))
else:
# Reverting the reshaping and transposing operations: (batch_size, seq_length, d_k)
x = transpose(x, perm=(0, 2, 1, 3))
x = reshape(x, shape=(shape(x)[0], shape(x)[1], self.d_k))
return x
def call(self, queries, keys, values, mask=None):
q_reshaped = self.reshape_tensor(self.W_q(queries), self.heads, True)
k_reshaped = self.reshape_tensor(self.W_k(keys), self.heads, True)
v_reshaped = self.reshape_tensor(self.W_v(values), self.heads, True)
o_reshaped = self.attention(q_reshaped, k_reshaped, v_reshaped, self.d_k, mask)
output = self.reshape_tensor(o_reshaped, self.heads, False)
return self.W_o(output)
# ## **Positional Embedding Fixed Layers**
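# The fixed (non-trainable) encoding built below follows $P(k, 2i) = \sin\big(k / n^{2i/d}\big)$ and $P(k, 2i+1) = \cos\big(k / n^{2i/d}\big)$ with $n = 10000$; the same sinusoidal table is used to initialise both the word-embedding and the position-embedding matrices, and the two embeddings are summed in `call()`.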
import tensorflow as tf
from tensorflow import convert_to_tensor, string
from tensorflow.keras.layers import TextVectorization, Embedding, Layer
from tensorflow.data import Dataset
import numpy as np
import matplotlib.pyplot as plt
class PositionEmbeddingFixedWeights(Layer):
def __init__(self, sequence_length, vocab_size, output_dim, **kwargs):
super(PositionEmbeddingFixedWeights, self).__init__(**kwargs)
word_embedding_matrix = self.get_position_encoding(vocab_size, output_dim)
position_embedding_matrix = self.get_position_encoding(
sequence_length, output_dim
)
self.word_embedding_layer = Embedding(
input_dim=vocab_size,
output_dim=output_dim,
weights=[word_embedding_matrix],
trainable=False,
)
self.position_embedding_layer = Embedding(
input_dim=sequence_length,
output_dim=output_dim,
weights=[position_embedding_matrix],
trainable=False,
)
def get_position_encoding(self, seq_len, d, n=10000):
P = np.zeros((seq_len, d))
for k in range(seq_len):
for i in np.arange(int(d / 2)):
denominator = np.power(n, 2 * i / d)
P[k, 2 * i] = np.sin(k / denominator)
P[k, 2 * i + 1] = np.cos(k / denominator)
return P
def call(self, inputs):
position_indices = tf.range(tf.shape(inputs)[-1])
embedded_indices = self.position_embedding_layer(position_indices)
embedded_words = self.word_embedding_layer(inputs)
return embedded_words + embedded_indices
# ## **Transformer Encoder**
from tensorflow.keras.layers import LayerNormalization, Layer, Dense, ReLU, Dropout
# **Implementing the Add and Norm Layer**
class AddNormalization(Layer):
def __init__(self, **kwargs):
super(AddNormalization, self).__init__(**kwargs)
self.layer_norm = LayerNormalization() # Layer Normalization Layer
def call(self, x, sublayer_x):
# The sublayer input and output need to be of the same shape to be summed
add = x + sublayer_x
# Apply layer normalization to the sum
return self.layer_norm(add)
# **Implementing the Feed-Forward Layer**
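# The feed-forward sub-layer below computes $\mathrm{FFN}(x) = \mathrm{ReLU}(x W_1 + b_1)\, W_2 + b_2$, expanding each position to dimensionality $d_{ff}$ before projecting it back to $d_{model}$.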
class FeedForward(Layer):
def __init__(self, d_ff, d_model, **kwargs):
super(FeedForward, self).__init__(**kwargs)
self.fully_connected1 = Dense(
d_ff
) # First fully connected layer shape(batch_size, seq_length, d_ff)
self.activation = (
ReLU()
) # ReLU activation layer shape(batch_size, seq_length, d_ff)
self.fully_connected2 = Dense(
d_model
) # Second fully connected layer shape(batch_size, seq_length, d_model)
def call(self, x):
# The input is passed into the two fully-connected layers, with a ReLU in between
x_fc1 = self.fully_connected1(x) #
return self.fully_connected2(self.activation(x_fc1))
# **Implementing the Encoder Layer**
class EncoderLayer(Layer):
def __init__(self, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
super(EncoderLayer, self).__init__(**kwargs)
self.multihead_attention = MultiHeadAttention(h, d_k, d_v, d_model)
self.dropout1 = Dropout(rate)
self.add_norm1 = AddNormalization()
self.feed_forward = FeedForward(d_ff, d_model)
self.dropout2 = Dropout(rate)
self.add_norm2 = AddNormalization()
def call(self, x, padding_mask, training):
# Multi-head attention layer
multihead_output = self.multihead_attention(x, x, x, padding_mask)
# Expected output shape = (batch_size, sequence_length, d_model)
# Add in a dropout layer
multihead_output = self.dropout1(multihead_output, training=training)
# Followed by an add & norm layer
addnorm_output = self.add_norm1(x, multihead_output)
# Expected output shape = (batch_size, sequence_length, d_model)
# Followed by a fully connected layer
feedforward_output = self.feed_forward(addnorm_output)
# Expected output shape = (batch_size, sequence_length, d_model)
# Add in another dropout layer
feedforward_output = self.dropout2(feedforward_output, training=training)
# Expected output shape = (batch_size, sequence_length, d_model)
# Followed by another Add & Norm Layer
return self.add_norm2(addnorm_output, feedforward_output)
# Expected output shape = (batch_size, sequence_length, d_model)
# **Implementing the Encoder**
class Encoder(Layer):
def __init__(
self, vocab_size, sequence_length, h, d_k, d_v, d_model, d_ff, n, rate, **kwargs
):
super(Encoder, self).__init__(**kwargs)
self.pos_encoding = PositionEmbeddingFixedWeights(
sequence_length, vocab_size, d_model
)
self.dropout = Dropout(rate)
self.encoder_layer = [
EncoderLayer(h, d_k, d_v, d_model, d_ff, rate) for _ in range(n)
]
def call(self, input_sentence, padding_mask, training):
# Generate the positional encoding
pos_encoding_output = self.pos_encoding(input_sentence)
# Expected output shape = (batch_size, sequence_length, d_model)
# Add in a dropout layer
x = self.dropout(pos_encoding_output, training=training)
# Pass on the positional encoded values to each encoder layer
for i, layer in enumerate(self.encoder_layer):
x = layer(
x, padding_mask, training
) # this the arguments of call() function of EncoderLayer class
return x
# ## **Testing out the code**
h = 8 # Number of self-attention heads
d_k = 64  # Dimensionality of the linearly projected queries and keys
d_v = 64  # Dimensionality of the linearly projected values
d_ff = 2048  # Dimensionality of the inner fully connected layer
d_model = 512  # Dimensionality of the model sub-layers' outputs
n = 6 # Number of layers in the encoder stack
batch_size = 64 # Batch size from the training process
dropout_rate = 0.1 # Frequency of dropping the input units in the dropout layers
enc_vocab_size = 20 # vocabulary size for the encoder
input_seq_length = 5 # Maximum length of the input sequence
input_seq = random.random((batch_size, input_seq_length))
len(input_seq[0])
encoder = Encoder(
enc_vocab_size, input_seq_length, h, d_k, d_v, d_model, d_ff, n, dropout_rate
)
print(encoder(input_seq, None, True))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/232/129232572.ipynb
| null | null |
[{"Id": 129232572, "ScriptId": 38144128, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6877706, "CreationDate": "05/12/2023 03:47:13", "VersionNumber": 1.0, "Title": "Transformer_encoder", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 211.0, "LinesInsertedFromPrevious": 211.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# 
from tensorflow.keras.layers import LayerNormalization, Layer, Dense, ReLU, Dropout
# **Import For Multi-head Attention Layer**
from tensorflow import matmul, math, reshape, shape, transpose, cast, float32
from tensorflow.keras.layers import Dense, Layer
from keras.backend import softmax
from numpy import random
# **Implement the scaled dot-product attention**
class DotProductAttention(Layer):
def __init__(self, **kwargs):
super(DotProductAttention, self).__init__(**kwargs)
def call(self, queries, keys, values, d_k, mask=None):
scores = matmul(queries, keys, transpose_b=True) / math.sqrt(cast(d_k, float32))
if mask is not None:
scores += -1e9 * mask
weights = softmax(scores)
return matmul(weights, values)
# **Implementing Multi-head attention**
class MultiHeadAttention(Layer):
def __init__(self, h, d_k, d_v, d_model, **kwargs):
super(MultiHeadAttention, self).__init__(**kwargs)
self.attention = DotProductAttention()
self.heads = h
self.d_k = d_k
self.d_v = d_v
self.d_model = d_model
self.W_q = Dense(d_k)
self.W_k = Dense(d_k)
self.W_v = Dense(d_v)
self.W_o = Dense(d_model)
def reshape_tensor(self, x, heads, flag):
if flag:
# (batch_size, heads, seq_length, -1)
x = reshape(x, shape=(shape(x)[0], shape(x)[1], heads, -1))
x = transpose(x, perm=(0, 2, 1, 3))
else:
# Reverting the reshaping and transposing operations: (batch_size, seq_length, d_k)
x = transpose(x, perm=(0, 2, 1, 3))
x = reshape(x, shape=(shape(x)[0], shape(x)[1], self.d_k))
return x
def call(self, queries, keys, values, mask=None):
q_reshaped = self.reshape_tensor(self.W_q(queries), self.heads, True)
k_reshaped = self.reshape_tensor(self.W_k(keys), self.heads, True)
v_reshaped = self.reshape_tensor(self.W_v(values), self.heads, True)
o_reshaped = self.attention(q_reshaped, k_reshaped, v_reshaped, self.d_k, mask)
output = self.reshape_tensor(o_reshaped, self.heads, False)
return self.W_o(output)
# ## **Positional Embedding Fixed Layers**
import tensorflow as tf
from tensorflow import convert_to_tensor, string
from tensorflow.keras.layers import TextVectorization, Embedding, Layer
from tensorflow.data import Dataset
import numpy as np
import matplotlib.pyplot as plt
class PositionEmbeddingFixedWeights(Layer):
def __init__(self, sequence_length, vocab_size, output_dim, **kwargs):
super(PositionEmbeddingFixedWeights, self).__init__(**kwargs)
word_embedding_matrix = self.get_position_encoding(vocab_size, output_dim)
position_embedding_matrix = self.get_position_encoding(
sequence_length, output_dim
)
self.word_embedding_layer = Embedding(
input_dim=vocab_size,
output_dim=output_dim,
weights=[word_embedding_matrix],
trainable=False,
)
self.position_embedding_layer = Embedding(
input_dim=sequence_length,
output_dim=output_dim,
weights=[position_embedding_matrix],
trainable=False,
)
def get_position_encoding(self, seq_len, d, n=10000):
P = np.zeros((seq_len, d))
for k in range(seq_len):
for i in np.arange(int(d / 2)):
denominator = np.power(n, 2 * i / d)
P[k, 2 * i] = np.sin(k / denominator)
P[k, 2 * i + 1] = np.cos(k / denominator)
return P
def call(self, inputs):
position_indices = tf.range(tf.shape(inputs)[-1])
embedded_indices = self.position_embedding_layer(position_indices)
embedded_words = self.word_embedding_layer(inputs)
return embedded_words + embedded_indices
# ## **Transformer Encoder**
from tensorflow.keras.layers import LayerNormalization, Layer, Dense, ReLU, Dropout
# **Implementing the Add and Norm Layer**
class AddNormalization(Layer):
def __init__(self, **kwargs):
super(AddNormalization, self).__init__(**kwargs)
self.layer_norm = LayerNormalization() # Layer Normalization Layer
def call(self, x, sublayer_x):
# The sublayer input and output need to be of the same shape to be summed
add = x + sublayer_x
# Apply layer normalization to the sum
return self.layer_norm(add)
# **Implementing the Feed-Forward Layer**
class FeedForward(Layer):
def __init__(self, d_ff, d_model, **kwargs):
super(FeedForward, self).__init__(**kwargs)
self.fully_connected1 = Dense(
d_ff
) # First fully connected layer shape(batch_size, seq_length, d_ff)
self.activation = (
ReLU()
) # ReLU activation layer shape(batch_size, seq_length, d_ff)
self.fully_connected2 = Dense(
d_model
) # Second fully connected layer shape(batch_size, seq_length, d_model)
def call(self, x):
# The input is passed into the two fully-connected layers, with a ReLU in between
x_fc1 = self.fully_connected1(x) #
return self.fully_connected2(self.activation(x_fc1))
# **Implementing the Encoder Layer**
class EncoderLayer(Layer):
def __init__(self, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
super(EncoderLayer, self).__init__(**kwargs)
self.multihead_attention = MultiHeadAttention(h, d_k, d_v, d_model)
self.dropout1 = Dropout(rate)
self.add_norm1 = AddNormalization()
self.feed_forward = FeedForward(d_ff, d_model)
self.dropout2 = Dropout(rate)
self.add_norm2 = AddNormalization()
def call(self, x, padding_mask, training):
# Multi-head attention layer
multihead_output = self.multihead_attention(x, x, x, padding_mask)
# Expected output shape = (batch_size, sequence_length, d_model)
# Add in a dropout layer
multihead_output = self.dropout1(multihead_output, training=training)
# Followed by an add & norm layer
addnorm_output = self.add_norm1(x, multihead_output)
# Expected output shape = (batch_size, sequence_length, d_model)
# Followed by a fully connected layer
feedforward_output = self.feed_forward(addnorm_output)
# Expected output shape = (batch_size, sequence_length, d_model)
# Add in another dropout layer
feedforward_output = self.dropout2(feedforward_output, training=training)
# Expected output shape = (batch_size, sequence_length, d_model)
# Followed by another Add & Norm Layer
return self.add_norm2(addnorm_output, feedforward_output)
# Expected output shape = (batch_size, sequence_length, d_model)
# **Implementing the Encoder**
class Encoder(Layer):
def __init__(
self, vocab_size, sequence_length, h, d_k, d_v, d_model, d_ff, n, rate, **kwargs
):
super(Encoder, self).__init__(**kwargs)
self.pos_encoding = PositionEmbeddingFixedWeights(
sequence_length, vocab_size, d_model
)
self.dropout = Dropout(rate)
self.encoder_layer = [
EncoderLayer(h, d_k, d_v, d_model, d_ff, rate) for _ in range(n)
]
def call(self, input_sentence, padding_mask, training):
# Generate the positional encoding
pos_encoding_output = self.pos_encoding(input_sentence)
# Expected output shape = (batch_size, sequence_length, d_model)
# Add in a dropout layer
x = self.dropout(pos_encoding_output, training=training)
# Pass on the positional encoded values to each encoder layer
for i, layer in enumerate(self.encoder_layer):
x = layer(
x, padding_mask, training
) # this the arguments of call() function of EncoderLayer class
return x
# ## **Testing out the code**
h = 8 # Number of self-attention heads
d_k = 64  # Dimensionality of the linearly projected queries and keys
d_v = 64  # Dimensionality of the linearly projected values
d_ff = 2048  # Dimensionality of the inner fully connected layer
d_model = 512  # Dimensionality of the model sub-layers' outputs
n = 6 # Number of layers in the encoder stack
batch_size = 64 # Batch size from the training process
dropout_rate = 0.1 # Frequency of dropping the input units in the dropout layers
enc_vocab_size = 20 # vocabulary size for the encoder
input_seq_length = 5 # Maximum length of the input sequence
input_seq = random.random((batch_size, input_seq_length))
len(input_seq[0])
encoder = Encoder(
enc_vocab_size, input_seq_length, h, d_k, d_v, d_model, d_ff, n, dropout_rate
)
print(encoder(input_seq, None, True))
| false | 0 | 2,509 | 0 | 2,509 | 2,509 |
||
129232687
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import transformers
import tensorflow as tf
from transformers import TFAutoModel, AutoTokenizer
from datasets import Dataset, DatasetDict
from sklearn.model_selection import train_test_split, KFold
from tensorflow.keras.callbacks import EarlyStopping, Callback
from tensorflow.keras.optimizers.schedules import ExponentialDecay
from kerastuner.tuners import RandomSearch
from kerastuner.engine.hyperparameters import HyperParameters
train_df = pd.read_csv("/kaggle/input/nlp-disaster-tweets/train.csv")
test_df = pd.read_csv("/kaggle/input/nlp-disaster-tweets/test.csv")
X_train = train_df.drop(columns=["keyword", "location", "target"])
y_train = train_df["target"]
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, random_state=42
)
import tensorflow as tf
import transformers
# Load the pre-trained BERT tokenizer
tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-uncased")
# Define the model architecture
model = transformers.TFBertForSequenceClassification.from_pretrained(
"bert-base-uncased"
)
# Tokenize the input text
train_encodings = tokenizer(X_train["text"].tolist(), truncation=True, padding=True)
val_encodings = tokenizer(X_val["text"].tolist(), truncation=True, padding=True)
# Create TensorFlow datasets from the tokenized encodings and labels
train_dataset = (
tf.data.Dataset.from_tensor_slices((dict(train_encodings), y_train.values))
.shuffle(len(X_train))
.batch(32)
)
val_dataset = tf.data.Dataset.from_tensor_slices(
(dict(val_encodings), y_val.values)
).batch(32)
# Train the model
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy("accuracy")
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model.fit(train_dataset, validation_data=val_dataset, epochs=3)
# Tokenize the test input text
test_encodings = tokenizer(test_df["text"].tolist(), truncation=True, padding=True)
# Create a TensorFlow dataset from the tokenized encodings
test_dataset = tf.data.Dataset.from_tensor_slices(dict(test_encodings)).batch(32)
# Use the trained model to predict the target labels for the test data
predictions = model.predict(test_dataset)
# Convert the predicted probabilities to predicted labels
predicted_labels = tf.argmax(predictions.logits, axis=1)
# Print the predicted labels
print(predicted_labels)
# Predict on the validation set
y_pred = model.predict(val_dataset)
# Get the predicted labels
y_pred_labels = np.argmax(y_pred.logits, axis=1)
# Get the true labels
y_true = y_val.values
# Compute the evaluation metrics
from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred_labels))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/232/129232687.ipynb
| null | null |
[{"Id": 129232687, "ScriptId": 38414468, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13908764, "CreationDate": "05/12/2023 03:48:56", "VersionNumber": 1.0, "Title": "nlp-disaster-tweets1", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 102.0, "LinesInsertedFromPrevious": 102.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import transformers
import tensorflow as tf
from transformers import TFAutoModel, AutoTokenizer
from datasets import Dataset, DatasetDict
from sklearn.model_selection import train_test_split, KFold
from tensorflow.keras.callbacks import EarlyStopping, Callback
from tensorflow.keras.optimizers.schedules import ExponentialDecay
from kerastuner.tuners import RandomSearch
from kerastuner.engine.hyperparameters import HyperParameters
train_df = pd.read_csv("/kaggle/input/nlp-disaster-tweets/train.csv")
test_df = pd.read_csv("/kaggle/input/nlp-disaster-tweets/test.csv")
X_train = train_df.drop(columns=["keyword", "location", "target"])
y_train = train_df["target"]
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, random_state=42
)
import tensorflow as tf
import transformers
# Load the pre-trained BERT tokenizer
tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-uncased")
# Define the model architecture
model = transformers.TFBertForSequenceClassification.from_pretrained(
"bert-base-uncased"
)
# Tokenize the input text
train_encodings = tokenizer(X_train["text"].tolist(), truncation=True, padding=True)
val_encodings = tokenizer(X_val["text"].tolist(), truncation=True, padding=True)
# Create TensorFlow datasets from the tokenized encodings and labels
train_dataset = (
tf.data.Dataset.from_tensor_slices((dict(train_encodings), y_train.values))
.shuffle(len(X_train))
.batch(32)
)
val_dataset = tf.data.Dataset.from_tensor_slices(
(dict(val_encodings), y_val.values)
).batch(32)
# Train the model
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy("accuracy")
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model.fit(train_dataset, validation_data=val_dataset, epochs=3)
# Tokenize the test input text
test_encodings = tokenizer(test_df["text"].tolist(), truncation=True, padding=True)
# Create a TensorFlow dataset from the tokenized encodings
test_dataset = tf.data.Dataset.from_tensor_slices(dict(test_encodings)).batch(32)
# Use the trained model to predict the target labels for the test data
predictions = model.predict(test_dataset)
# Convert the predicted probabilities to predicted labels
predicted_labels = tf.argmax(predictions.logits, axis=1)
# Print the predicted labels
print(predicted_labels)
# Predict on the validation set
y_pred = model.predict(val_dataset)
# Get the predicted labels
y_pred_labels = np.argmax(y_pred.logits, axis=1)
# Get the true labels
y_true = y_val.values
# Compute the evaluation metrics
from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred_labels))
| false | 0 | 1,007 | 0 | 1,007 | 1,007 |
||
129232996
|
# Running on GPU:
# Helper commands, used for debugging purposes
# the detectron2 build only succeeds if the CUDA version is correct
#!nvidia-smi
#!nvcc --version
# import torch
# torch.__version__
# import torchvision
# torchvision.__version__
# Base setup:
# detectron2 logger
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# common libraries
import numpy as np
import os, json, cv2, random
import matplotlib.pyplot as plt
# detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.structures import BoxMode
# ## Running model on a single frame
im = cv2.imread("/kaggle/working/input.jpg")
plt.figure(figsize=(15, 7.5))
plt.imshow(im[..., ::-1]) # bgr to rgb
from detectron2.structures import Boxes
import detectron2.structures.boxes as box_ops
from detectron2.structures import Boxes, Instances
import torch
from detectron2.structures import Boxes
import detectron2.structures.boxes as box_ops
from detectron2.structures import Boxes, Instances
import math
def get_persons_objects(instances):
pred_classes = instances.pred_classes
pred_boxes = instances.pred_boxes
pred_scores = instances.scores
new_boxes = Boxes(torch.tensor([]))
new_classes = torch.tensor([])
new_scores = torch.tensor([])
for i, t in enumerate(pred_classes):
if t.item() == 0:
new_classes = torch.cat((new_classes, t.unsqueeze(0).to("cpu:0")))
new_boxes = Boxes.cat((new_boxes, pred_boxes[i].to("cpu:0")))
new_scores = torch.cat(
(new_scores, pred_scores[i].unsqueeze(0).to("cpu:0"))
)
pred_classes = new_classes
pred_boxes = new_boxes
scores = new_scores
return pred_classes, pred_boxes, scores
cfg = get_cfg()
cfg.merge_from_file(
model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set threshold for this model
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
"COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
)
predictor = DefaultPredictor(cfg)
outputs = predictor(im[..., ::-1])
pred_classes, pred_boxes, pred_scores = get_persons_objects(
outputs["instances"].to("cpu")
)
instances = Instances(
image_size=im.shape[:2],
pred_boxes=pred_boxes,
pred_classes=pred_classes.int(),
)
v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_instance_predictions(instances.to("cpu"))
plt.figure(figsize=(15, 7.5))
plt.imshow(out.get_image())
# #### Running the same code over frames sampled from the video (every 100th frame)
#
input_file = "video5.mp4"
print("Execution starts....")
# Open the input video file
input_video = cv2.VideoCapture(input_file)
detections = np.empty((0, 5))
frame_count = 0
# Loop over the frames in the input video
while True:
# Read the next frame from the input video
ret, im = input_video.read()
if not ret:
break
print(f"Processing frame:{frame_count}", end=" | ")
outputs = predictor(im)
instances = outputs["instances"].to("cpu")
pred_classes, pred_boxes, scores = get_persons_objects(instances)
instances = Instances(
image_size=im.shape[:2],
pred_boxes=pred_boxes,
pred_classes=pred_classes.int(),
scores=scores,
)
v = Visualizer(
im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2
)
out = v.draw_instance_predictions(instances.to("cpu"))
plt.figure(figsize=(15, 7.5))
plt.imshow(out.get_image())
print("Total Person objects found: ", pred_classes.shape[0])
frame_count += 100 # each Nth frame
input_video.set(cv2.CAP_PROP_POS_FRAMES, frame_count)
print("ALL DONE!")
input_video.release()
# # Segmentation
# #### On a single frame
im2 = cv2.imread("/kaggle/working/input2.jpg")
from detectron2.structures import Boxes
import detectron2.structures.boxes as box_ops
from detectron2.structures import Boxes, Instances
import torch
from detectron2.structures import Boxes
import detectron2.structures.boxes as box_ops
from detectron2.structures import Boxes, Instances
import math
def get_persons_objects(instances):
pred_classes = instances.pred_classes
pred_boxes = instances.pred_boxes
pred_scores = instances.scores
new_boxes = Boxes(torch.tensor([]))
new_classes = torch.tensor([])
new_scores = torch.tensor([])
for i, t in enumerate(pred_classes):
if t.item() == 0:
new_classes = torch.cat((new_classes, t.unsqueeze(0).to("cpu:0")))
new_boxes = Boxes.cat((new_boxes, pred_boxes[i].to("cpu:0")))
new_scores = torch.cat(
(new_scores, pred_scores[i].unsqueeze(0).to("cpu:0"))
)
pred_classes = new_classes
pred_boxes = new_boxes
scores = new_scores
return pred_classes, pred_boxes, scores
cfg = get_cfg()
cfg.merge_from_file(
model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
)
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
"COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"
)
predictor = DefaultPredictor(cfg)
panoptic_seg, segments_info = predictor(im2)["panoptic_seg"]
v = Visualizer(im2[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_panoptic_seg_predictions(panoptic_seg.to("cpu"), segments_info)
plt.figure(figsize=(25, 15))
plt.imshow(out.get_image()[:, :, ::-1][..., ::-1])
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/232/129232996.ipynb
| null | null |
[{"Id": 129232996, "ScriptId": 38420584, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1780352, "CreationDate": "05/12/2023 03:52:39", "VersionNumber": 1.0, "Title": "Detectron2 over video frames", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 206.0, "LinesInsertedFromPrevious": 151.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 55.0, "LinesInsertedFromFork": 151.0, "LinesDeletedFromFork": 367.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 55.0, "TotalVotes": 0}]
| null | null | null | null |
# Running on GPU:
# Helper commands, used for debugging purposes
# the detectron2 build only succeeds if the CUDA version is correct
#!nvidia-smi
#!nvcc --version
# import torch
# torch.__version__
# import torchvision
# torchvision.__version__
# Base setup:
# detectron2 logger
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# common libraries
import numpy as np
import os, json, cv2, random
import matplotlib.pyplot as plt
# detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.structures import BoxMode
# ## Running model on a single frame
im = cv2.imread("/kaggle/working/input.jpg")
plt.figure(figsize=(15, 7.5))
plt.imshow(im[..., ::-1]) # bgr to rgb
from detectron2.structures import Boxes
import detectron2.structures.boxes as box_ops
from detectron2.structures import Boxes, Instances
import torch
from detectron2.structures import Boxes
import detectron2.structures.boxes as box_ops
from detectron2.structures import Boxes, Instances
import math
def get_persons_objects(instances):
pred_classes = instances.pred_classes
pred_boxes = instances.pred_boxes
pred_scores = instances.scores
new_boxes = Boxes(torch.tensor([]))
new_classes = torch.tensor([])
new_scores = torch.tensor([])
for i, t in enumerate(pred_classes):
if t.item() == 0:
new_classes = torch.cat((new_classes, t.unsqueeze(0).to("cpu:0")))
new_boxes = Boxes.cat((new_boxes, pred_boxes[i].to("cpu:0")))
new_scores = torch.cat(
(new_scores, pred_scores[i].unsqueeze(0).to("cpu:0"))
)
pred_classes = new_classes
pred_boxes = new_boxes
scores = new_scores
return pred_classes, pred_boxes, scores
cfg = get_cfg()
cfg.merge_from_file(
model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set threshold for this model
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
"COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
)
predictor = DefaultPredictor(cfg)
outputs = predictor(im[..., ::-1])
pred_classes, pred_boxes, pred_scores = get_persons_objects(
outputs["instances"].to("cpu")
)
instances = Instances(
image_size=im.shape[:2],
pred_boxes=pred_boxes,
pred_classes=pred_classes.int(),
)
v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_instance_predictions(instances.to("cpu"))
plt.figure(figsize=(15, 7.5))
plt.imshow(out.get_image())
# #### Running the same code over frames sampled from the video (every 100th frame)
#
input_file = "video5.mp4"
print("Execution starts....")
# Open the input video file
input_video = cv2.VideoCapture(input_file)
detections = np.empty((0, 5))
frame_count = 0
# Loop over the frames in the input video
while True:
# Read the next frame from the input video
ret, im = input_video.read()
if not ret:
break
print(f"Processing frame:{frame_count}", end=" | ")
outputs = predictor(im)
instances = outputs["instances"].to("cpu")
pred_classes, pred_boxes, scores = get_persons_objects(instances)
instances = Instances(
image_size=im.shape[:2],
pred_boxes=pred_boxes,
pred_classes=pred_classes.int(),
scores=scores,
)
v = Visualizer(
im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2
)
out = v.draw_instance_predictions(instances.to("cpu"))
plt.figure(figsize=(15, 7.5))
plt.imshow(out.get_image())
print("Total Person objects found: ", pred_classes.shape[0])
frame_count += 100 # each Nth frame
input_video.set(cv2.CAP_PROP_POS_FRAMES, frame_count)
print("ALL DONE!")
input_video.release()
# # Segmentation
# #### On a single frame
im2 = cv2.imread("/kaggle/working/input2.jpg")
from detectron2.structures import Boxes
import detectron2.structures.boxes as box_ops
from detectron2.structures import Boxes, Instances
import torch
from detectron2.structures import Boxes
import detectron2.structures.boxes as box_ops
from detectron2.structures import Boxes, Instances
import math
def get_persons_objects(instances):
pred_classes = instances.pred_classes
pred_boxes = instances.pred_boxes
pred_scores = instances.scores
new_boxes = Boxes(torch.tensor([]))
new_classes = torch.tensor([])
new_scores = torch.tensor([])
for i, t in enumerate(pred_classes):
if t.item() == 0:
new_classes = torch.cat((new_classes, t.unsqueeze(0).to("cpu:0")))
new_boxes = Boxes.cat((new_boxes, pred_boxes[i].to("cpu:0")))
new_scores = torch.cat(
(new_scores, pred_scores[i].unsqueeze(0).to("cpu:0"))
)
pred_classes = new_classes
pred_boxes = new_boxes
scores = new_scores
return pred_classes, pred_boxes, scores
cfg = get_cfg()
cfg.merge_from_file(
model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
)
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
"COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"
)
predictor = DefaultPredictor(cfg)
panoptic_seg, segments_info = predictor(im2)["panoptic_seg"]
v = Visualizer(im2[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_panoptic_seg_predictions(panoptic_seg.to("cpu"), segments_info)
plt.figure(figsize=(25, 15))
plt.imshow(out.get_image())
| false | 0 | 1,744 | 0 | 1,744 | 1,744 |
||
129348580
|
<jupyter_start><jupyter_text>Photovoltaic System O&M inspection
According to [one of the three articles](https://onlinelibrary.wiley.com/doi/10.1002/pip.3564) that explains how [PV-HAWK](https://lukasbommes.github.io/PV-Hawk/index.html) ([MIT License](https://github.com/LukasBommes/PV-Hawk/blob/master/LICENSE)) tool works, five different PV plants were used to train one of the models used by this tool. The plants were named A, B, C, D and E for anonymization purposes. This dataset is a sample from the first 12 arrays of PV plant A.
---
# 1. Context
Both large and small photovoltaic systems are susceptible to equipment failures, especially in the modules, due to the operational stresses they are exposed to and to errors made during the installation of these devices. Although numerous internal and external factors cause these failures, the phenomenon common to several of them is a hot spot over the defective area of the module. The immediate impact is a reduction in the generated power and, in the long term, a shortened equipment lifetime because of the high temperatures involved. The preventive maintenance method for recognizing this phenomenon is the use of thermographic images in inspections of photovoltaic modules. Through this procedure, modules with failures at an early stage are immediately identified by their high heat signatures compared with the others, captured by cameras with infrared sensors. Currently, the use of this type of camera attached to drones stands out for increasing the inspected area and reducing the inspection time.
To understand more about this, read these reports by the International Energy Agency (IEA):
- [ Review of failures of PV modules](https://iea-pvps.org/wp-content/uploads/2020/01/IEA-PVPS_T13-01_2014_Review_of_Failures_of_Photovoltaic_Modules_Final.pdf);
- [Review of IR and EL images applications for PV systems](https://iea-pvps.org/wp-content/uploads/2020/01/Review_on_IR_and_EL_Imaging_for_PV_Field_Applications_by_Task_13.pdf).
## 1.1 Photovoltaic system specifications
According to the [dataset article](https://onlinelibrary.wiley.com/doi/10.1002/pip.3564), the photovoltaic system on which the thermographic inspection was carried out is located in Germany and is composed of 2376 polycrystalline silicon PV modules, each measuring 1650 x 992 mm (60-cell).
The images in this dataset refer to the region marked in red in the Google Maps screenshot of the photovoltaic system's location.
<br>

<br>
## 1.2 Thermal inspection specifications
The inspection took place under clear-sky conditions with solar irradiance above 700 W/m². The table below presents more details on the weather parameters that influence thermographic inspections.
| Number of modules | Distance (m) | Peak velocity (m/s) | Air Temperature (ºC)| Global radiation (J/cm²)| Wind speed (m/s)|
| --- | --- | -- | --- | --- | --- |
| 13640 | 7612 | 4.1 | 25 | 39.7 | 2.8 |
The drone used was a DJI MATRICE 210 coupled with a FLIR XT2 thermal camera, with the following specifications:
- Thermal resolution of 640x512 pixels;
- Visual resolution of 1280x720 pixels;
- Focal length of 13 mm;
- Frame rate of 8 Hz.
The drone was controlled manually, positioned at an altitude of 10 m to 30 m from the ground with a velocity that ensures blur-free images. The camera orientation was facing vertically downwards (nadir) at all times.
Aiming to reduce inspection cost and duration, and especially to increase the drone range before a battery change is needed, the rows were scanned sequentially using two types of PV array layout: one in which only a single array appears in the image, and one in which two arrays appear.
<br>

<br>

<br>
As shown in the table below, scanning two rows simultaneously shortens the flight duration by a factor of 2.1, decreases the flight distance by a factor of 1.9 and increases module throughput by a factor of 2.09. Despite these benefits, the resolution of the extracted PV module images is reduced.
| Inspection layout | Flight distance (m) | Flight duration (s) | Average module resolution (px) |Module throughput (1/s) |
| --- | --- | --- | --- | --- |
| Single row | 1307 | 707 | 141 X 99 | 3.36 |
| Double row | 681 | 338 | 73 X 50 | 7.03 |
## 1.3 Dataset organization
The images are separated by inspection type into different folders (single or double row). In each folder, there are thermographic images in TIFF format and a CSV file with the drone's geospatial and temporal data during the inspection. Visual (RGB) images were acquired only for the double-row inspection type.
Besides that, I've uploaded files that can be used to calibrate the infrared and visual cameras and correct the distortion caused by the camera lenses.
# 2. Resources
- These guides by the [FLIR](http://support.flir.com/appstories/AppStories/Electrical&Mechanical/Testing_solar_panels_EN.pdf) and [TESTO](https://www.murcal.com/pdf%20folder/15.testo_thermography_guide.pdf) companies are good resources for understanding more about thermography in the context of solar modules;
- There's [another one by FLIR](https://thermalcapture.com/wp-content/uploads/2019/08/pv-system-inspection-thermal-drones-07-15-19.pdf) that explains in depth how aerial thermal inspections of photovoltaic systems are performed and why they matter in this field;
- To understand the level of influence that module degradation has on the yield of the photovoltaic system, you can read [IEC TS-62446-3]( https://ayscomdatatec.com/wp-content/uploads/2019/09/Normativa-IEC-TS-62446-3.pdf) and [Raptor Maps's knowledge hub](https://raptormaps.com/solar-tech-docs/).
# 3. Inspiration
A service often provided by companies in this area is a SaaS that displays the detected faulty modules in a bird's-eye view of the photovoltaic system and calculates the energy loss, as the image below shows. One can create a web app (using Streamlit or Plotly/Dash) that detects PV modules with an instance segmentation model, tracks them with an object tracker and classifies their integrity (binary or multiclass classification) with an image classification model.
<br>

<br>
This idea can be used to guide a maintenance team so that it can intervene and replace panels when necessary.
Kaggle dataset identifier: photovoltaic-system-o-and-m-inspection
<jupyter_script>import pickle
from pathlib import Path
import tifffile as tif
import cv2
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import pandas as pd
# # 1. Dataset Reading
# The best way is to first read the metadata files from both datasets
SINGLE_ROW_METADATA_PATH = "/kaggle/input/photovoltaic-system-o-and-m-inspection/datasets/single-row/metadata.csv"
columns_to_rename = {"thermal image name": "thermal_image_name"}
sr_metadata = pd.read_csv(SINGLE_ROW_METADATA_PATH).rename(columns=columns_to_rename)
sr_metadata.head()
DOUBLE_ROW_METADATA_PATH = "/kaggle/input/photovoltaic-system-o-and-m-inspection/datasets/double-row/metadata.csv"
columns_to_rename = {
"thermal image name": "thermal_image_name",
"rgb image name": "rgb_image_name",
}
dr_metadata = pd.read_csv(DOUBLE_ROW_METADATA_PATH).rename(columns=columns_to_rename)
dr_metadata.head()
# We need to get the full path for the images
def get_image_full_path(image_name, image_type):
if image_type == "single_row_thermal":
origin_path = "/kaggle/input/photovoltaic-system-o-and-m-inspection/datasets/single-row/thermal images"
elif image_type == "double_row_thermal":
origin_path = "/kaggle/input/photovoltaic-system-o-and-m-inspection/datasets/double-row/thermal images"
elif image_type == "double_row_rgb":
        origin_path = "/kaggle/input/photovoltaic-system-o-and-m-inspection/datasets/double-row/rgb images"
return Path(origin_path, image_name)
sr_metadata = sr_metadata.assign(
thermal_image_name=sr_metadata.thermal_image_name.apply(
lambda x: get_image_full_path(x, "single_row_thermal")
)
).assign(timestamp=pd.to_datetime(sr_metadata.timestamp))
dr_metadata = (
dr_metadata.assign(
thermal_image_name=dr_metadata.thermal_image_name.apply(
lambda x: get_image_full_path(x, "double_row_thermal")
)
)
.assign(
rgb_image_name=dr_metadata.rgb_image_name.apply(
lambda x: get_image_full_path(x, "double_row_rgb")
)
)
    .assign(timestamp=pd.to_datetime(dr_metadata.timestamp))
)
# **Now we can load the images!**
# I've created the Thermogram class just to make it possible to get the thermal image and the converted one from the same object, like the [Flyr library](https://bitbucket.org/nimmerwoner/flyr/src/master/) does
class Thermogram:
def __init__(self, path: Path):
self.path = path
@property
def celsius(self) -> np.ndarray:
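        # raw TIFF counts are scaled by 0.04 to get Kelvin, then shifted by -273.15 to get degrees Celsius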
return (tif.imread(self.path.as_posix()) * 0.04) - 273.15
def render(self) -> np.ndarray:
image = self.celsius
image = (image - np.min(image)) / (np.max(image) - np.min(image))
return (image * 255.0).astype(np.uint8)
def load_image(image_path: Path):
image_format = image_path.suffix
if image_format == ".jpg":
return cv2.imread(image_path.as_posix())
elif image_format == ".tiff":
return Thermogram(image_path)
image_number = 57
thermogram = load_image(sr_metadata.thermal_image_name[image_number])
_, ax = plt.subplots(1, 2)
im = ax[0].imshow(thermogram.celsius, cmap="inferno")
ax[0].set_title("Thermography image")
ax[0].set_axis_off()
ax[1].imshow(thermogram.render(), cmap="gray")
ax[1].set_title("Rendered image (8 bit image)")
ax[1].set_axis_off()
cax = make_axes_locatable(ax[0]).append_axes("right", size="5%", pad=0.05)
plt.colorbar(
im, cax=cax, values=np.unique(thermogram.celsius), label="Temperature (ºC)"
)
plt.tight_layout()
plt.show()
thermogram = load_image(dr_metadata.thermal_image_name[image_number])
visual = load_image(dr_metadata.rgb_image_name[image_number])
_, ax = plt.subplots(1, 3, figsize=(10, 5))
im = ax[0].imshow(thermogram.celsius, cmap="inferno")
ax[0].set_title("Thermography image")
ax[0].set_axis_off()
ax[1].imshow(thermogram.render(), cmap="gray")
ax[1].set_title("Rendered image (8 bit image)")
ax[1].set_axis_off()
ax[2].imshow(visual[:, :, ::-1])
ax[2].set_title("Visual image")
ax[2].set_axis_off()
cax = make_axes_locatable(ax[0]).append_axes("right", size="5%", pad=0.05)
plt.colorbar(
im, cax=cax, values=np.unique(thermogram.celsius), label="Temperature (ºC)"
)
plt.tight_layout()
plt.show()
# # 2. Camera calibration
# This step is important because camera lenses often create distortions in the images. In this dataset only the RGB images were affected, but the intrinsic and extrinsic parameters of the IR camera can be used for other tasks such as structure from motion (as PV-Hawk does).
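# For reference, a hedged sketch (not part of this dataset's files) of how such remap grids
# are usually produced once a camera matrix and distortion coefficients are known, e.g. from
# `cv2.calibrateCamera` on checkerboard images; the argument names below are hypothetical
# placeholders.
def build_undistort_maps(camera_matrix, dist_coeffs, image_size):
    # image_size is (width, height); alpha=0 keeps only pixels that remain valid after undistortion
    new_camera_matrix, _ = cv2.getOptimalNewCameraMatrix(
        camera_matrix, dist_coeffs, image_size, 0
    )
    return cv2.initUndistortRectifyMap(
        camera_matrix, dist_coeffs, None, new_camera_matrix, image_size, cv2.CV_32FC1
    )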
def remove_distortion(image: np.ndarray, image_type: str = "rgb"):
mapx_path = "/kaggle/input/photovoltaic-system-o-and-m-inspection/calibration files/RGB/mapx.pkl"
mapy_path = "/kaggle/input/photovoltaic-system-o-and-m-inspection/calibration files/RGB/mapy.pkl"
if image_type == "ir":
mapx_path = "/kaggle/input/photovoltaic-system-o-and-m-inspection/calibration files/IR/mapx.pkl"
mapy_path = "/kaggle/input/photovoltaic-system-o-and-m-inspection/calibration files/IR/mapy.pkl"
with open(mapx_path, "rb") as mapx_file, open(mapy_path, "rb") as mapy_file:
mapx = pickle.load(mapx_file)
mapy = pickle.load(mapy_file)
return cv2.remap(
image, mapx, mapy, cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE
)
undistorted_rgb = remove_distortion(visual)
_, ax = plt.subplots(1, 2, figsize=(20, 5))
ax[0].imshow(visual[:, :, ::-1])
ax[0].set_title("RGB distorted")
ax[0].set_axis_off()
ax[1].imshow(undistorted_rgb[:, :, ::-1])
ax[1].set_title("RGB undistorted")
ax[1].set_axis_off()
plt.tight_layout()
plt.show()
undistorted_ir = remove_distortion(thermogram.render(), "ir")
_, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].imshow(thermogram.render(), cmap="gray")
ax[0].set_title("IR distorted")
ax[0].set_axis_off()
ax[1].imshow(undistorted_ir, cmap="gray")
ax[1].set_title("IR undistorted")
ax[1].set_axis_off()
plt.tight_layout()
plt.show()
# # 3. Images Alignment
# The best way to align the two images would be to use a feature extractor and descriptor
# such as ORB, together with RANSAC, to determine corresponding points between them, instead of the simple centering done here. Something odd is that, although the thermographic image has a lower resolution than the RGB one, it has a larger field of view, since more modules appear in it
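# A minimal sketch (not used below) of that feature-based approach with OpenCV; in practice,
# matching between the thermal and RGB modalities may need extra preprocessing, and the
# parameter values here are arbitrary.
def align_ir_to_rgb(ir_gray, rgb_bgr, min_matches=10):
    rgb_gray = cv2.cvtColor(rgb_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ir, des_ir = orb.detectAndCompute(ir_gray, None)
    kp_rgb, des_rgb = orb.detectAndCompute(rgb_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ir, des_rgb), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise ValueError("Not enough matches to estimate a reliable homography")
    src = np.float32([kp_ir[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_rgb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    height, width = rgb_gray.shape
    return cv2.warpPerspective(ir_gray, homography, (width, height))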
from PIL import Image
def get_position_for_image_fusion(fg_shape, bg_shape):
bg_height = bg_shape[0] // 2
bg_width = bg_shape[1] // 2
fg_height = fg_shape[0] // 2
fg_width = fg_shape[1] // 2
return bg_width - fg_width, bg_height - fg_height
position = get_position_for_image_fusion(undistorted_ir.shape, undistorted_rgb.shape)
fg_image = Image.fromarray(undistorted_ir)
bg_image = Image.fromarray(undistorted_rgb[:, :, ::-1])
back_image = bg_image.copy()
back_image.paste(fg_image, position)
back_image
# # 4. Thermal Inspection
# It is possible to determine parameters like the inspection time and the drone path
sr_inspection_time = sr_metadata.timestamp.filter([0, 5294]).diff().iloc[1].seconds
dr_inspection_time = dr_metadata.timestamp.filter([0, 2540]).diff().iloc[1].seconds
print(
f"Single row inspection time: {sr_inspection_time // 60} minutes and {sr_inspection_time % 60} seconds"
)
print(
f"Double row inspection time: {dr_inspection_time // 60} minutes and {dr_inspection_time % 60} seconds"
)
_, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].scatter(sr_metadata.longitude, sr_metadata.latitude)
ax[0].set_title("Single row inspection")
ax[1].scatter(dr_metadata.longitude, dr_metadata.latitude)
ax[1].set_title("Double row inspection")
plt.show()
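# As a further hedged example (not in the original notebook), the flight distance can be
# approximated from consecutive GPS fixes with the haversine formula, assuming the
# `latitude`/`longitude` columns are in decimal degrees.
def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance in kilometres between points given in degrees
    earth_radius_km = 6371.0
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dphi = np.radians(lat2 - lat1)
    dlambda = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlambda / 2) ** 2
    return 2 * earth_radius_km * np.arcsin(np.sqrt(a))
def flight_distance_km(metadata):
    lat = metadata.latitude.to_numpy()
    lon = metadata.longitude.to_numpy()
    return float(np.sum(haversine_km(lat[:-1], lon[:-1], lat[1:], lon[1:])))
print("Approximate single row flight distance (km):", flight_distance_km(sr_metadata))
print("Approximate double row flight distance (km):", flight_distance_km(dr_metadata))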
# # 5. Defective modules
# Highlight defective modules once the module masks have been created
# # 6. Thermal image orthomosaic
#
# Show how to make a thermal orthomosaic with this dataset
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/348/129348580.ipynb
|
photovoltaic-system-o-and-m-inspection
|
marcosgabriel
|
[{"Id": 129348580, "ScriptId": 38452073, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5048762, "CreationDate": "05/13/2023 02:56:13", "VersionNumber": 2.0, "Title": "[DATASET INTRO] Photovoltaic System O&M inspection", "EvaluationDate": "05/13/2023", "IsChange": false, "TotalLines": 218.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 218.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185309654, "KernelVersionId": 129348580, "SourceDatasetVersionId": 5672295}]
|
[{"Id": 5672295, "DatasetId": 3256284, "DatasourceVersionId": 5747827, "CreatorUserId": 5048762, "LicenseName": "Other (specified in description)", "CreationDate": "05/12/2023 19:53:23", "VersionNumber": 2.0, "Title": "Photovoltaic System O&M inspection", "Slug": "photovoltaic-system-o-and-m-inspection", "Subtitle": "Thermal and RGB images from inspection of a photovoltaic system", "Description": "According to [one of the three articles](https://onlinelibrary.wiley.com/doi/10.1002/pip.3564) that explains how [PV-HAWK](https://lukasbommes.github.io/PV-Hawk/index.html) ([MIT License](https://github.com/LukasBommes/PV-Hawk/blob/master/LICENSE)) tool works, five different PV plants were used to train one of the models used by this tool. The plants were named A, B, C, D and E for anonymization purposes. This dataset is a sample from the first 12 arrays of PV plant A.\n\n---\n\n# 1. Context\n\nBoth large and small photovoltaic systems are susceptible to failures in their equipment, especially in modules due to operational stresses that are exposed and errors during the installation process of these devices. Although numerous internal and external factors originate these failures, the common phenomenon presented by several of them is hot spots on module defective area. The immediate impact is perceptible in the reduction of the generated power and, in the long term, in the reduction of the useful life of the equipment due to the high temperatures presented. The preventive maintenance method for recognizing this phenomenon is the use of thermography images in inspections of photovoltaic modules. Through this procedure, faulty modules are immediately identified with failures at an early stage due to their high heat signatures compared to the others, captured by cameras with infrared sensors. Currently, the use of this type of camera attached to drones stands out for providing an increase in the inspection area and a reduction in its execution time.\n\nTo understand more about this, read these reports by International energy agency (IEA):\n- [ Review of failures of PV modules](https://iea-pvps.org/wp-content/uploads/2020/01/IEA-PVPS_T13-01_2014_Review_of_Failures_of_Photovoltaic_Modules_Final.pdf);\n- [Review of IR and EL images applications for PV systems](https://iea-pvps.org/wp-content/uploads/2020/01/Review_on_IR_and_EL_Imaging_for_PV_Field_Applications_by_Task_13.pdf).\n\n## 1.1 Photovoltaic system specifications\n\nAcording to the [dataset article](https://onlinelibrary.wiley.com/doi/10.1002/pip.3564), the photovoltaic system on which the thermographic inspection was carried out is located in Germany and it's composed of 2376 PV polycrystalline silicon modules, measuring 1650 x 992 mm (60-cell) each.\n\nThe images in this dataset refer to the region marked in red in the google maps screenshot of the photovoltaic system location.\n\n<br>\n\n\n\n<br>\n\n## 1.2 Thermal inspection specifications\n\nThe inspection took place under clearsky conditions and solar irradiance above 700\u2009W/m\u00b2. 
In the table bellow more detail are presented for the weather parameters that influence thermography inspections.\n\n| Number of modules | Distance (m) | Peak velocity (m/s) | Air Temperature (\u00baC)| Global radiation (J/cm\u00b2)| Wind speed (m/s)|\n| --- | --- | -- | --- | --- | --- |\n| 13640 | 7612 | 4.1 | 25 | 39.7 | 2.8 |\n\n The drone used was a DJI model MATRICE 210 coupled with a FLIR XT2 thermal camera and with the following specifications: \n\n- Thermal resolution of 640x512 pixels;\n- Visual resolution of 1280x720 pixels;\n- Focal length of 13 mm;\n- Frame rate of 8 Hz .\n\nThe drone was controlled manually, positioned at an altitude of 10 m to 30\u2009m from the ground with a velocity that ensures blur-free images. The camera orientation was facing vertically downwards (nadir) at all times.\n\nAiming at reducing inspection cost and duration, especially for increasing the drone range before a battery change is needed, the images were sequentially scanned considering two types of PV arrays layouts: only one single array appears in the image and then two arrays.\n\n<br>\n\n\n\n<br>\n\n\n\n<br>\n\nAs showed in the table bellow, scanning two rows simultaneously speeds up the flight duration by a factor of 2.1, decreases flight distance by a factor of 1.9 and increases module throughput by a factor of 2.09. Despite these benefits, the resolution of extracted PV module images reduces.\n\n| Inspection layout | Flight distance (m) | Flight duration (s) | Average module resolution (px) |Module throughput (1/s) |\n| --- | --- | --- | --- | --- |\n| Single row | 1307 | 707 | 141 X 99 | 3.36 |\n| Double row | 681 | 338 | 73 X 50 | 7.03 |\n\n## 1.3 Dataset organization\n\nThe images are separated by type of inspection in different folders (single or double rows). In each folder, there are thermographic images in TIFF format and a CSV file with drone's geospatial and temporal data during the inspection. Only for the double row inspection type that visual (RGB) images were acquired.\n\nBesides, I've uploaded files to use for calibrate infrared and visual cameras to correct any type of distortion that camera lenses cause. \n\n# 2. Resources\n\n- This guides by [FLIR](http://support.flir.com/appstories/AppStories/Electrical&Mechanical/Testing_solar_panels_EN.pdf) and [TESTO](https://www.murcal.com/pdf%20folder/15.testo_thermography_guide.pdf) companies are good resources to understand more about thermography in the solar modules context;\n\n- There's [another one by FLIR](https://thermalcapture.com/wp-content/uploads/2019/08/pv-system-inspection-thermal-drones-07-15-19.pdf) that explains in depth how aerial thermal inspections of photovoltaic systems are made and their importance in this field;\n\n- To understand the level of influence that the module degradation has on the yield of the photovoltaic system you can read the [IEC TS-62446-3]( https://ayscomdatatec.com/wp-content/uploads/2019/09/Normativa-IEC-TS-62446-3.pdf) and the [Raptor maps's knoledge hub](https://raptormaps.com/solar-tech-docs/).\n\n# 3. Inspiration\n\nA service often provided by companies in this area is a SaaS that displays the detected faulty modules in an bird's eye view of the photovoltaic system and calculate the energy loss, like the image bellow shows. 
One can create a web app (using streamlit or plotly/dash) that detect PV modules with a instance segmentation model, track them with a object tracker and classify their integrity (binary or multiclass classification) with a image classification model.\n\n<br>\n\n\n\n<br>\n\nThis idea can be used for guiding a maintenance team in order to intervene and replace panels if necessary.", "VersionNotes": "Data Update 2023-05-12", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3256284, "CreatorUserId": 5048762, "OwnerUserId": 5048762.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5672295.0, "CurrentDatasourceVersionId": 5747827.0, "ForumId": 3321771, "Type": 2, "CreationDate": "05/11/2023 18:26:40", "LastActivityDate": "05/11/2023", "TotalViews": 1219, "TotalDownloads": 142, "TotalVotes": 1, "TotalKernels": 1}]
|
[{"Id": 5048762, "UserName": "marcosgabriel", "DisplayName": "Marcos Gabriel", "RegisterDate": "05/08/2020", "PerformanceTier": 0}]
|
| false | 0 | 2,663 | 0 | 4,857 | 2,663 |
||
129238175
|
<jupyter_start><jupyter_text>pracdataset
Kaggle dataset identifier: pracdataset
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
data = pd.read_csv("/kaggle/input/pracdataset/practicedataset.csv")
data
data = data.drop("Category", axis=1)
data.head()
data.info()
data.shape
data.columns
# data['Class'].replace('Benign',1,inplace=True)
# data['Class'].replace('Malware',0,inplace=True)
data
data["Class"].unique()
x_col = data.columns.to_list()
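# the target ("Class") is assumed to be the last column; drop it so x_col keeps only the feature columns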
x_col.pop(-1)
y_col = "Class"
from sklearn.model_selection import train_test_split
train_x, test_x, train_y, test_y = train_test_split(
data[x_col], data[y_col].values, test_size=0.1
)
train_x.shape, test_x.shape, train_y.shape, test_y.shape
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
rf = RandomForestClassifier()
rf.fit(train_x, train_y)
test_y_pred = rf.predict(test_x)
print("Testing Accuracy:", accuracy_score(test_y, test_y_pred))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/238/129238175.ipynb
|
pracdataset
|
saidevansh
|
[{"Id": 129238175, "ScriptId": 38422845, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11551086, "CreationDate": "05/12/2023 05:07:45", "VersionNumber": 1.0, "Title": "last_model", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 56.0, "LinesInsertedFromPrevious": 56.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185109197, "KernelVersionId": 129238175, "SourceDatasetVersionId": 5502878}]
|
[{"Id": 5502878, "DatasetId": 3174517, "DatasourceVersionId": 5577274, "CreatorUserId": 12364005, "LicenseName": "Unknown", "CreationDate": "04/24/2023 04:41:01", "VersionNumber": 1.0, "Title": "pracdataset", "Slug": "pracdataset", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3174517, "CreatorUserId": 12364005, "OwnerUserId": 12364005.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5502878.0, "CurrentDatasourceVersionId": 5577274.0, "ForumId": 3238770, "Type": 2, "CreationDate": "04/24/2023 04:41:01", "LastActivityDate": "04/24/2023", "TotalViews": 76, "TotalDownloads": 6, "TotalVotes": 0, "TotalKernels": 2}]
|
[{"Id": 12364005, "UserName": "saidevansh", "DisplayName": "Sai Devansh", "RegisterDate": "11/12/2022", "PerformanceTier": 0}]
|
| false | 1 | 481 | 0 | 501 | 481 |
||
129238337
|
<jupyter_start><jupyter_text>Uber Request Data.csv
### Context
This dataset is a part of assignment given by IIITB and Upgrad for Data Science Course.
### Content
This data set is a masked data set which is similar to what data analysts at Uber handle.
Kaggle dataset identifier: uber-request-data
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/uber-request-data/Uber Request Data.csv")
df.head()
df.tail()
df.describe()
df.shape
df.info()
# Convert the request and drop timestamp columns to a uniform datetime format
df["Request timestamp"] = pd.to_datetime(df["Request timestamp"])
df["Drop timestamp"] = pd.to_datetime(df["Drop timestamp"])
df.info()
df.isnull().sum()
df.Status.value_counts()
# # 1. Which date had the most completed trips during the two-week period?
# Calculate the duration of each trip in minutes
df["trip_duration"] = (
df["Drop timestamp"] - df["Request timestamp"]
).dt.total_seconds() / 60
# Add a new column 'is_completed' that indicates whether a trip is completed or not
df["is_completed"] = df["Status"].apply(lambda x: 1 if x == "Trip Completed" else 0)
# Group the data by date and calculate the number of completed trips and the mean of trip duration on each date
completed_trips_by_date = (
df[df["is_completed"] == 1]
.groupby(pd.Grouper(key="Request timestamp", freq="1D"))
.agg({"is_completed": "sum", "trip_duration": "mean"})
)
# Find the date with the highest number of completed trips and the mean of completed trip duration on that date
max_completed_trips_date = completed_trips_by_date["is_completed"].idxmax()
max_completed_trips = completed_trips_by_date["is_completed"].max()
mean_trip_duration = completed_trips_by_date.loc[
max_completed_trips_date, "trip_duration"
]
print("The date with the most completed trips is:", max_completed_trips_date)
print("The number of completed trips on that date is:", max_completed_trips)
print("The mean of completed trip duration on that date is:", mean_trip_duration)
import matplotlib.pyplot as plt
import seaborn as sns
# Group the data by hour and calculate the number of completed trips in each hour
completed_trips_by_hour = (
    df[df["is_completed"] == 1]
    .groupby(pd.Grouper(key="Request timestamp", freq="1H"))["is_completed"]
    .sum()
)
# Calculate the daily total of completed trips
completed_trips_by_day = completed_trips_by_hour.resample("D").sum()
# Create a line plot of the completed trips over time
sns.lineplot(x=completed_trips_by_day.index, y=completed_trips_by_day.values)
plt.xlabel("Date")
plt.ylabel("Number of Completed Trips")
plt.title("Completed Trips over Time")
plt.show()
# ### Insights:
# - The date with the most completed trips is: 7th Nov, 2016
# - The date with second highest completed trips is: Dec, 2016
# - The number of completed trips on that date is: 601
# - The mean of completed trip duration on that date is: 1372.5707154742097
# # 2. What was the highest no. of completed trips within a 24 hour period?
# Extract the hour from requested timestamp
df["Request hour"] = df["Request timestamp"].dt.hour
df.head()
import matplotlib.pyplot as plt
import seaborn as sns
# Calculate the frequency of each hour
hour_freq = df["Request hour"].value_counts()
# Sort the frequencies in descending order
hour_freq_sorted = hour_freq.sort_values(ascending=False)
# Select the top 3 frequencies
top_3 = hour_freq_sorted.head(3)
# Create the histogram
plt.hist(df["Request hour"], edgecolor="RED", bins=24, color="blue")
plt.xlabel("Request hour")
plt.ylabel("No. of Requests")
# Loop through the top 3 frequencies and add a text label to the corresponding bar
for hour, freq in top_3.items():
plt.text(hour, freq, str(freq), ha="center", va="bottom", color="Green")
plt.show()
df.columns
# Calculate the duration of each trip in minutes
df["trip_duration"] = (
df["Drop timestamp"] - df["Request timestamp"]
).dt.total_seconds() / 60
# Add a new column 'is_completed' that indicates whether a trip is completed or not
df["is_completed"] = df["Status"].apply(lambda x: 1 if x == "Trip Completed" else 0)
# Group the data by hour and calculate the number of completed trips in each hour
completed_trips_by_hour = (
    df[df["is_completed"] == 1]
    .groupby(pd.Grouper(key="Request timestamp", freq="1H"))["is_completed"]
    .sum()
)
# Find the highest number of completed trips and the date when it occurred
max_completed_trips = completed_trips_by_hour.max()
max_completed_trips_date = completed_trips_by_hour.idxmax()
print(
"The highest number of completed trips within a 24-hour period is:",
max_completed_trips,
)
print(
"The date when the highest number of completed trips occurred is:",
max_completed_trips_date,
)
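# Note: the grouping above finds the single busiest hour. As a hedged sketch (not part of
# the original analysis), a rolling 24-hour window over the same hourly series answers the
# "24 hour period" question more directly.
rolling_24h = completed_trips_by_hour.rolling("24H").sum()
print(
    "Max completed trips in any trailing 24-hour window:",
    int(rolling_24h.max()),
    "ending at",
    rolling_24h.idxmax(),
)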
# Plot the number of completed trips by hour
completed_trips_by_hour.plot(kind="line")
# Set the plot title and axis labels
plt.title("Completed trips by hour")
plt.xlabel("Hour")
plt.ylabel("Number of completed trips")
# Show the plot
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/238/129238337.ipynb
|
uber-request-data
|
anupammajhi
|
[{"Id": 129238337, "ScriptId": 38419899, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11189402, "CreationDate": "05/12/2023 05:10:11", "VersionNumber": 1.0, "Title": "Uber Data Analysis", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 156.0, "LinesInsertedFromPrevious": 156.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185109488, "KernelVersionId": 129238337, "SourceDatasetVersionId": 182068}]
|
[{"Id": 182068, "DatasetId": 78953, "DatasourceVersionId": 192927, "CreatorUserId": 1125300, "LicenseName": "Unknown", "CreationDate": "11/17/2018 23:22:01", "VersionNumber": 1.0, "Title": "Uber Request Data.csv", "Slug": "uber-request-data", "Subtitle": "For Uber Supply Demand Gap - EDA", "Description": "### Context\n\nThis dataset is a part of assignment given by IIITB and Upgrad for Data Science Course.\n\n\n### Content\n\nThis data set is a masked data set which is similar to what data analysts at Uber handle.\n\n\n### Acknowledgements\n\nSources are taken from the PGD Data Science course from Upgrad", "VersionNotes": "Initial release", "TotalCompressedBytes": 395061.0, "TotalUncompressedBytes": 395061.0}]
|
[{"Id": 78953, "CreatorUserId": 1125300, "OwnerUserId": 1125300.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 182068.0, "CurrentDatasourceVersionId": 192927.0, "ForumId": 88310, "Type": 2, "CreationDate": "11/17/2018 23:22:01", "LastActivityDate": "11/17/2018", "TotalViews": 36167, "TotalDownloads": 4630, "TotalVotes": 32, "TotalKernels": 15}]
|
[{"Id": 1125300, "UserName": "anupammajhi", "DisplayName": "Anupam Majhi", "RegisterDate": "06/14/2017", "PerformanceTier": 1}]
|
| false | 1 | 1,574 | 0 | 1,653 | 1,574 |
||
129342514
|
<jupyter_start><jupyter_text>Coconut Leaf Dataset for Pest Identification
The dataset includes 5 types of coconut leaf diseases:
- Leaflets
- Caterpillars
- Yellowing
- Drying
- Flaccidity
Use the dataset to classify and predict pest-infected leaves to be made easy for agriculture.
Kaggle dataset identifier: coconut-leaf-dataset-for-pest-identification
<jupyter_script>import os
import shutil
import random
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# **Data preprocessing for Coconut Leaf Pest Identification :**
# This code performs the data preprocessing required to train a model that identifies pests in coconut leaves. It defines the directories for the training, validation, and test sets, splits the images into these sets according to the given ratios, and copies them into the appropriate directories. The images in each class folder are shuffled first so that the splits are random.
data_dir = "/kaggle/input/coconut-leaf-dataset-for-pest-identification/archive"
saved_data_dir = "/kaggle/working/"
train_dir = os.path.join(saved_data_dir, "train")
validation_dir = os.path.join(saved_data_dir, "validation")
test_dir = os.path.join(saved_data_dir, "test")
os.makedirs(train_dir, exist_ok=True)
os.makedirs(validation_dir, exist_ok=True)
os.makedirs(test_dir, exist_ok=True)
train_ratio = 0.7
validation_ratio = 0.15
test_ratio = 0.15
for class_name in [
"CCI_Caterpillars",
"CCI_Leaflets",
"WCLWD_DryingofLeaflets",
"WCLWD_Flaccidity",
"WCLWD_Yellowing",
]:
class_dir = os.path.join(data_dir, class_name)
files = os.listdir(class_dir)
random.shuffle(files)
train_split_idx = int(train_ratio * len(files))
validation_split_idx = int((train_ratio + validation_ratio) * len(files))
train_files = files[:train_split_idx]
validation_files = files[train_split_idx:validation_split_idx]
test_files = files[validation_split_idx:]
for filename in train_files:
src_path = os.path.join(class_dir, filename)
dst_path = os.path.join(train_dir, class_name, filename)
os.makedirs(os.path.dirname(dst_path), exist_ok=True)
shutil.copy(src_path, dst_path)
for filename in validation_files:
src_path = os.path.join(class_dir, filename)
dst_path = os.path.join(validation_dir, class_name, filename)
os.makedirs(os.path.dirname(dst_path), exist_ok=True)
shutil.copy(src_path, dst_path)
for filename in test_files:
src_path = os.path.join(class_dir, filename)
dst_path = os.path.join(test_dir, class_name, filename)
os.makedirs(os.path.dirname(dst_path), exist_ok=True)
shutil.copy(src_path, dst_path)
# **importing data :**
# Define the paths to the datasets
train_dir = "/kaggle/working/train"
validation_dir = "/kaggle/working/validation"
test_dir = "/kaggle/working/test"
# Define the input image dimensions
img_height = 224
img_width = 224
# Define the number of classes
num_classes = 5
# **Instantiate data generators for training, validation, and test sets :**
train_datagen = ImageDataGenerator(
rescale=1.0 / 255,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
fill_mode="nearest",
)
validation_datagen = ImageDataGenerator(rescale=1.0 / 255)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(img_height, img_width),
batch_size=28,
class_mode="categorical",
)
validation_generator = validation_datagen.flow_from_directory(
validation_dir,
target_size=(img_height, img_width),
batch_size=32,
class_mode="categorical",
)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(img_height, img_width),
batch_size=32,
class_mode="categorical",
)
# **Define the model architecture :**
model = tf.keras.models.Sequential(
[
tf.keras.layers.Conv2D(
32, (3, 3), activation="relu", input_shape=(img_height, img_width, 3)
),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(128, (3, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(256, (3, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(256, (3, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation="relu"),
tf.keras.layers.Dense(num_classes, activation="softmax"),
]
)
# **Compile the model :**
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# **Train the model :**
history = model.fit(train_generator, epochs=10, validation_data=validation_generator)
# **Evaluate the model on the test set :**
test_loss, test_acc = model.evaluate(test_generator)
print("Test accuracy:", test_acc)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/342/129342514.ipynb
|
coconut-leaf-dataset-for-pest-identification
|
shravanatirtha
|
[{"Id": 129342514, "ScriptId": 38454962, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14180659, "CreationDate": "05/13/2023 01:09:35", "VersionNumber": 1.0, "Title": "Classification using DCNN - 98% Accuracy", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 134.0, "LinesInsertedFromPrevious": 134.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
|
[{"Id": 185297739, "KernelVersionId": 129342514, "SourceDatasetVersionId": 5384645}]
|
[{"Id": 5384645, "DatasetId": 3122487, "DatasourceVersionId": 5458309, "CreatorUserId": 7940226, "LicenseName": "Database: Open Database, Contents: \u00a9 Original Authors", "CreationDate": "04/12/2023 15:38:07", "VersionNumber": 1.0, "Title": "Coconut Leaf Dataset for Pest Identification", "Slug": "coconut-leaf-dataset-for-pest-identification", "Subtitle": "Use the dataset for Deep Learning Algorithms", "Description": "The dataset includes 5 types of coconut leaf diseases:\n- Leaflets\n- Caterpillars\n- Yellowing\n- Drying\n- Flaccidity\n\nUse the dataset to classify and predict pest-infected leaves to be made easy for agriculture.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3122487, "CreatorUserId": 7940226, "OwnerUserId": 7940226.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5384645.0, "CurrentDatasourceVersionId": 5458309.0, "ForumId": 3185992, "Type": 2, "CreationDate": "04/12/2023 15:38:07", "LastActivityDate": "04/12/2023", "TotalViews": 2027, "TotalDownloads": 153, "TotalVotes": 11, "TotalKernels": 2}]
|
[{"Id": 7940226, "UserName": "shravanatirtha", "DisplayName": "Shravana Tirtha", "RegisterDate": "07/20/2021", "PerformanceTier": 1}]
|
| false | 0 | 1,511 | 3 | 1,612 | 1,511 |
||
129335135
|
<jupyter_start><jupyter_text>Flickr8k-Images-Captions
### Dataset
A small image captioning dataset that is perfect to get started in image captioning. I have also made a video on building an image captioning model in PyTorch where we use this dataset that you could check out: https://youtu.be/y2BaTt1fxJU
Kaggle dataset identifier: flickr8kimagescaptions
<jupyter_script># # **Image Captioning**
# Image captioning is a computer vision and natural language processing task that involves generating a textual description of the content in an image. It combines techniques from computer vision, such as object detection and scene understanding, with natural language processing to create human-like descriptions of visual content.
# The goal of image captioning is to create a system that can accurately describe an image in a way that is both concise and informative.
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import cv2 as cv2
import matplotlib.pyplot as plt
import tensorflow as tf
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# **Look into the data**
# First we need to understand the structure of the dataset.
# The Dataset used in this project is 'flickr8kimagescaptions'. It is a benchmark collection for sentence-based image description consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. … The images were chosen from six different Flickr groups, and tend not to contain any well-known people or locations, but were manually selected to depict a variety of scenes and situations.
# The dataset was uploaded to Kaggle server so they can be imported to Kaggle environment seamlessly.
# >**Folders**
# The dataset contains one folder and text file. The folder contains 8000 colored images. The captions.txt file contains all the image captioning for each images. Looking into the structure of the textfile we can see the text file content as below
# * *image,caption
# * 1000268201_693b08cb0e.jpg,A child in a pink dress is climbing up a set of stairs in an entry way .
# * 1000268201_693b08cb0e.jpg,A girl going into a wooden building .
# * 1000268201_693b08cb0e.jpg,A little girl climbing into a wooden playhouse .
# * 1000268201_693b08cb0e.jpg,A little girl climbing the stairs to her playhouse .
# * 1000268201_693b08cb0e.jpg,A little girl in a pink dress going into a wooden cabin .
# * 1001773457_577c3a7d70.jpg,A black dog and a spotted dog are fighting
# * 1001773457_577c3a7d70.jpg,A black dog and a tri-colored dog playing with each other on the road .
# * 1001773457_577c3a7d70.jpg,A black dog and a white dog with brown spots are staring at each other in the street .
# * 1001773457_577c3a7d70.jpg,Two dogs of different breeds looking at each other on the road .
# * 1001773457_577c3a7d70.jpg,Two dogs on pavement moving toward each other .
# * 1002674143_1b742ab4b8.jpg,A little girl covered in paint sits in front of a painted rainbow with her hands in a bowl .
# * 1002674143_1b742ab4b8.jpg,A little girl is sitting in front of a large painted rainbow .
# * 1002674143_1b742ab4b8.jpg,A small girl in the grass plays with fingerpaints in front of a white canvas with a rainbow on it .
# * 1002674143_1b742ab4b8.jpg,There is a girl with pigtails sitting in front of a rainbow painting .
# * 1002674143_1b742ab4b8.jpg,Young girl with pigtails painting outside in the grass .
# * 1003163366_44323f5815.jpg,A man lays on a bench while his dog sits by him .*
# It can be observed that the image name and caption are separated by a comma. The sentences appear well formatted: each one starts with an uppercase character and ends with a full stop. During data pre-processing, I will clean the captions by removing all punctuation and numbers and converting them to lowercase.
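# A minimal sketch of that cleaning, applied to one sample caption from captions.txt
# (the regex pattern and the word-length filter are illustrative choices, mirrored later
# in text_preprocessing):
import re

sample_caption = "A child in a pink dress is climbing up a set of stairs in an entry way ."
cleaned = re.sub(r"[^a-z ]", "", sample_caption.lower())  # lowercase, drop punctuation and digits
cleaned = " ".join(w for w in cleaned.split() if len(w) > 1)  # drop single-character tokens
print(cleaned)  # child in pink dress is climbing up set of stairs in an entry way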
# load the image caption
data = pd.read_csv("/kaggle/input/flickr8kimagescaptions/flickr8k/captions.txt")
data.head()
# **Import image file name**
# load the image information
from tensorflow.keras.preprocessing.image import load_img, img_to_array
def load_data():
import glob
image_path = "images"
file_name = []
file_path = os.path.join(
"/kaggle/input/flickr8kimagescaptions/flickr8k/images", "*"
)
for filename in sorted(glob.glob(file_path)):
file_name.append(filename)
file_name = np.asarray(file_name)
return file_name
def readImage(path, img_size=224):
img = load_img(
path,
color_mode="rgb",
target_size=(img_size, img_size),
interpolation="bilinear",
keep_aspect_ratio=True,
)
img = img_to_array(img)
img = img / 255.0
return img
# call the load_data function and test if they work
file_name = load_data()
fig, axs = plt.subplots(nrows=2, ncols=3, figsize=(10, 10))
axs = axs.flatten()
for i in range(6):
img = readImage(file_name[i], 224)
axs[i].imshow(img)
# Display image - code from https://www.kaggle.com/code/quadeer15sh/flickr8k-image-captioning-using-cnns-lstms
def display_images(temp_df):
from textwrap import wrap
temp_df = temp_df.reset_index(drop=True)
plt.figure(figsize=(20, 20))
n = 0
for i in range(15):
n += 1
plt.subplot(5, 5, n)
plt.subplots_adjust(hspace=0.7, wspace=0.3)
image = readImage(
f"/kaggle/input/flickr8kimagescaptions/flickr8k/images/{temp_df.image[i]}"
)
plt.imshow(image)
plt.title("\n".join(wrap(temp_df.caption[i], 20)))
plt.axis("off")
display_images(data.sample(15))
# **Caption text pre-processing**
import re


def text_preprocessing(data):
    data["caption"] = data["caption"].apply(
        lambda x: x.lower()
    )  # convert sentences into lowercase
    data["caption"] = data["caption"].apply(
        lambda x: re.sub(r"[^a-z ]", "", x)
    )  # remove every character that is not a letter or a space (drops punctuation and digits)
    data["caption"] = data["caption"].apply(
        lambda x: x.replace(" is ", " ").replace(" are ", " ")
    )  # remove the words 'is' and 'are'
    data["caption"] = data["caption"].apply(
        lambda x: " ".join([word for word in x.split() if len(word) > 1])
    )  # remove single-character tokens
    data["caption"] = "startseq " + data["caption"] + " endseq"
    return data
data = text_preprocessing(data)
captions = data["caption"].tolist()
captions[:10]
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/335/129335135.ipynb
|
flickr8kimagescaptions
|
aladdinpersson
|
[{"Id": 129335135, "ScriptId": 37363164, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 3990085, "CreationDate": "05/12/2023 22:26:47", "VersionNumber": 1.0, "Title": "image captioning project", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 121.0, "LinesInsertedFromPrevious": 121.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185283485, "KernelVersionId": 129335135, "SourceDatasetVersionId": 1328792}]
|
[{"Id": 1328792, "DatasetId": 771078, "DatasourceVersionId": 1361085, "CreatorUserId": 2085560, "LicenseName": "Unknown", "CreationDate": "07/12/2020 09:20:01", "VersionNumber": 1.0, "Title": "Flickr8k-Images-Captions", "Slug": "flickr8kimagescaptions", "Subtitle": "Clean version of Flickr8k with images and their corresponding captions in txt", "Description": "### Dataset\n\nA small image captioning dataset that is perfect to get started in image captioning. I have also made a video on building an image captioning model in PyTorch where we use this dataset that you could check out: https://youtu.be/y2BaTt1fxJU", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 771078, "CreatorUserId": 2085560, "OwnerUserId": 2085560.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1328792.0, "CurrentDatasourceVersionId": 1361085.0, "ForumId": 786053, "Type": 2, "CreationDate": "07/12/2020 09:20:01", "LastActivityDate": "07/12/2020", "TotalViews": 16465, "TotalDownloads": 3311, "TotalVotes": 52, "TotalKernels": 8}]
|
[{"Id": 2085560, "UserName": "aladdinpersson", "DisplayName": "Aladdin Persson", "RegisterDate": "07/20/2018", "PerformanceTier": 2}]
|
# # **Image Captioning**
# Image captioning is a computer vision and natural language processing task that involves generating a textual description of the content in an image. It combines techniques from computer vision, such as object detection and scene understanding, with natural language processing to create human-like descriptions of visual content.
# The goal of image captioning is to create a system that can accurately describe an image in a way that is both concise and informative.
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import cv2 as cv2
import matplotlib.pyplot as plt
import tensorflow as tf
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# **Look into the data**
# First we need to understand the structure of the dataset.
# The Dataset used in this project is 'flickr8kimagescaptions'. It is a benchmark collection for sentence-based image description consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. … The images were chosen from six different Flickr groups, and tend not to contain any well-known people or locations, but were manually selected to depict a variety of scenes and situations.
# The dataset was uploaded to Kaggle server so they can be imported to Kaggle environment seamlessly.
# >**Folders**
# The dataset contains one folder and text file. The folder contains 8000 colored images. The captions.txt file contains all the image captioning for each images. Looking into the structure of the textfile we can see the text file content as below
# * *image,caption
# * 1000268201_693b08cb0e.jpg,A child in a pink dress is climbing up a set of stairs in an entry way .
# * 1000268201_693b08cb0e.jpg,A girl going into a wooden building .
# * 1000268201_693b08cb0e.jpg,A little girl climbing into a wooden playhouse .
# * 1000268201_693b08cb0e.jpg,A little girl climbing the stairs to her playhouse .
# * 1000268201_693b08cb0e.jpg,A little girl in a pink dress going into a wooden cabin .
# * 1001773457_577c3a7d70.jpg,A black dog and a spotted dog are fighting
# * 1001773457_577c3a7d70.jpg,A black dog and a tri-colored dog playing with each other on the road .
# * 1001773457_577c3a7d70.jpg,A black dog and a white dog with brown spots are staring at each other in the street .
# * 1001773457_577c3a7d70.jpg,Two dogs of different breeds looking at each other on the road .
# * 1001773457_577c3a7d70.jpg,Two dogs on pavement moving toward each other .
# * 1002674143_1b742ab4b8.jpg,A little girl covered in paint sits in front of a painted rainbow with her hands in a bowl .
# * 1002674143_1b742ab4b8.jpg,A little girl is sitting in front of a large painted rainbow .
# * 1002674143_1b742ab4b8.jpg,A small girl in the grass plays with fingerpaints in front of a white canvas with a rainbow on it .
# * 1002674143_1b742ab4b8.jpg,There is a girl with pigtails sitting in front of a rainbow painting .
# * 1002674143_1b742ab4b8.jpg,Young girl with pigtails painting outside in the grass .
# * 1003163366_44323f5815.jpg,A man lays on a bench while his dog sits by him .*
# It can be observed that the image name and caption are separated by a comma. The sentences appear well formatted: each one starts with an uppercase character and ends with a full stop. During data pre-processing, I will clean the captions by removing all punctuation and numbers and converting them to lowercase.
# load the image caption
data = pd.read_csv("/kaggle/input/flickr8kimagescaptions/flickr8k/captions.txt")
data.head()
# **Import image file name**
# load the image information
from tensorflow.keras.preprocessing.image import load_img, img_to_array
def load_data():
import glob
image_path = "images"
file_name = []
file_path = os.path.join(
"/kaggle/input/flickr8kimagescaptions/flickr8k/images", "*"
)
for filename in sorted(glob.glob(file_path)):
file_name.append(filename)
file_name = np.asarray(file_name)
return file_name
def readImage(path, img_size=224):
img = load_img(
path,
color_mode="rgb",
target_size=(img_size, img_size),
interpolation="bilinear",
keep_aspect_ratio=True,
)
img = img_to_array(img)
img = img / 255.0
return img
# call the load_data function and test if they work
file_name = load_data()
fig, axs = plt.subplots(nrows=2, ncols=3, figsize=(10, 10))
axs = axs.flatten()
for i in range(6):
img = readImage(file_name[i], 224)
axs[i].imshow(img)
# Display image - code from https://www.kaggle.com/code/quadeer15sh/flickr8k-image-captioning-using-cnns-lstms
def display_images(temp_df):
from textwrap import wrap
temp_df = temp_df.reset_index(drop=True)
plt.figure(figsize=(20, 20))
n = 0
for i in range(15):
n += 1
plt.subplot(5, 5, n)
plt.subplots_adjust(hspace=0.7, wspace=0.3)
image = readImage(
f"/kaggle/input/flickr8kimagescaptions/flickr8k/images/{temp_df.image[i]}"
)
plt.imshow(image)
plt.title("\n".join(wrap(temp_df.caption[i], 20)))
plt.axis("off")
display_images(data.sample(15))
# **Caption text pre-processing**
import re


def text_preprocessing(data):
    data["caption"] = data["caption"].apply(
        lambda x: x.lower()
    )  # convert sentences into lowercase
    data["caption"] = data["caption"].apply(
        lambda x: re.sub(r"[^a-z ]", "", x)
    )  # remove every character that is not a letter or a space (drops punctuation and digits)
    data["caption"] = data["caption"].apply(
        lambda x: x.replace(" is ", " ").replace(" are ", " ")
    )  # remove the words 'is' and 'are'
    data["caption"] = data["caption"].apply(
        lambda x: " ".join([word for word in x.split() if len(word) > 1])
    )  # remove single-character tokens
    data["caption"] = "startseq " + data["caption"] + " endseq"
    return data
data = text_preprocessing(data)
captions = data["caption"].tolist()
captions[:10]
| false | 0 | 2,111 | 0 | 2,202 | 2,111 |
||
129335122
|
<jupyter_start><jupyter_text>Cyclistic_Bike_Share_Apr_22-Mar_23
Kaggle dataset identifier: cyclistic-bike-share-apr-22-mar-23
<jupyter_script># "Tidyverse package is being installed."
install.packages("tidyverse")
library(tidyverse)
# "Directory containing the 12 month data is first selected and then the Cyclists data for
# each month is uploaded separately."
setwd("/kaggle/input/google-casestudy-1-2022-2023")
year2022_03 <- read_csv("202202-divvy-tripdata.csv")
year2022_04 <- read_csv("202203-divvy-tripdata.csv")
year2022_05 <- read_csv("202204-divvy-tripdata.csv")
year2022_06 <- read_csv("202205-divvy-tripdata.csv")
year2022_07 <- read_csv("202206-divvy-tripdata.csv")
year2022_08 <- read_csv("202207-divvy-tripdata.csv")
year2022_09 <- read_csv("202208-divvy-tripdata.csv")
year2022_10 <- read_csv("202209-divvy-publictripdata.csv")
year2022_11 <- read_csv("202210-divvy-tripdata.csv")
year2022_12 <- read_csv("202211-divvy-tripdata.csv")
year2023_01 <- read_csv("202212-divvy-tripdata.csv")
setwd("/kaggle/input/cyclistic-bike-share-apr-22-mar-23/New folder")
year2023_02 <- read_csv("Chic_bike_Feb_23.csv")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/335/129335122.ipynb
|
cyclistic-bike-share-apr-22-mar-23
|
mariusborel
|
[{"Id": 129335122, "ScriptId": 38451717, "ParentScriptVersionId": NaN, "ScriptLanguageId": 12, "AuthorUserId": 12937034, "CreationDate": "05/12/2023 22:26:30", "VersionNumber": 2.0, "Title": "notebook3b433deabd", "EvaluationDate": "05/12/2023", "IsChange": false, "TotalLines": 23.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 23.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185283472, "KernelVersionId": 129335122, "SourceDatasetVersionId": 5656018}, {"Id": 185283471, "KernelVersionId": 129335122, "SourceDatasetVersionId": 4993619}]
|
[{"Id": 5656018, "DatasetId": 3250899, "DatasourceVersionId": 5731404, "CreatorUserId": 10060125, "LicenseName": "Unknown", "CreationDate": "05/10/2023 14:15:11", "VersionNumber": 1.0, "Title": "Cyclistic_Bike_Share_Apr_22-Mar_23", "Slug": "cyclistic-bike-share-apr-22-mar-23", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3250899, "CreatorUserId": 10060125, "OwnerUserId": 10060125.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5656018.0, "CurrentDatasourceVersionId": 5731404.0, "ForumId": 3316274, "Type": 2, "CreationDate": "05/10/2023 14:15:11", "LastActivityDate": "05/10/2023", "TotalViews": 540, "TotalDownloads": 17, "TotalVotes": 5, "TotalKernels": 18}]
|
[{"Id": 10060125, "UserName": "mariusborel", "DisplayName": "Marius Borel", "RegisterDate": "03/27/2022", "PerformanceTier": 0}]
|
# "Tidyverse package is being installed."
install.packages("tidyverse")
library(tidyverse)
# "Directory containing the 12 month data is first selected and then the Cyclists data for
# each month is uploaded separately."
setwd("/kaggle/input/google-casestudy-1-2022-2023")
year2022_03 <- read_csv("202202-divvy-tripdata.csv")
year2022_04 <- read_csv("202203-divvy-tripdata.csv")
year2022_05 <- read_csv("202204-divvy-tripdata.csv")
year2022_06 <- read_csv("202205-divvy-tripdata.csv")
year2022_07 <- read_csv("202206-divvy-tripdata.csv")
year2022_08 <- read_csv("202207-divvy-tripdata.csv")
year2022_09 <- read_csv("202208-divvy-tripdata.csv")
year2022_10 <- read_csv("202209-divvy-publictripdata.csv")
year2022_11 <- read_csv("202210-divvy-tripdata.csv")
year2022_12 <- read_csv("202211-divvy-tripdata.csv")
year2023_01 <- read_csv("202212-divvy-tripdata.csv")
setwd("/kaggle/input/cyclistic-bike-share-apr-22-mar-23/New folder")
year2023_02 <- read_csv("Chic_bike_Feb_23.csv")
| false | 0 | 473 | 0 | 523 | 473 |
||
129335440
|
# # ICR Challenge - Identifying Age-Related Conditions 👨⚕️👴👵
# ## Table of contents
# 1. [Introduction](#Introduction)
# 2. [Load libraries](#Load-libraries)
# 3. [Data set](#Data-set)
# - [Load data set](#Load-data-set)
# - [Data set description](#Data-set-description)
# - [Data statistics](#Data-statistics)
# - [Check for missing data](#Check-for-missing-data)
# 4. [Exploratory Data Analysis](#Exploratory-Data-Analysis)
# - [Distribution of age-related conditions](#Distribution-of-age-related-conditions)
# - [Distribution of type of age-related condition](#Distribution-of-type-of-age-related-condition)
# - [Time distribution of data collection](#Time-distribution-of-data-collection)
# - [Experimental Characteristics](#Experimental-Characteristics)
# - [Health Characteristics](#Health-Characteristics)
# - [Feature correlation](#Feature-correlation)
# - [Data cleaning](#Data-cleaning)
# 5. [Model Training](#Model-Training)
# 6. [Submission](#Submission)
# # Introduction
# The goal of this competition is to predict if a person has or has not been diagnosed with one of three medical conditions (a binary classification problem), using various measurements of health characteristics.
# # Load Libraries
import warnings
warnings.simplefilter("ignore")
import os
import datetime
from pathlib import Path
import numpy as np
import pandas as pd
pd.set_option("display.max_rows", None)
import plotly
import plotly.io as pio
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import plotly.figure_factory as ff
import seaborn as sb
import matplotlib.pyplot as plt
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import (
train_test_split,
KFold,
StratifiedKFold,
RepeatedKFold,
RepeatedStratifiedKFold,
)
from xgboost import XGBClassifier
import optuna
from tqdm.notebook import tqdm
pd.set_option("display.max_columns", None)
pd.set_option("display.max_colwidth", None)
plotly.offline.init_notebook_mode()
class color:
PURPLE = "\033[95m"
CYAN = "\033[96m"
DARKCYAN = "\033[36m"
BLUE = "\033[94m"
GREEN = "\033[92m"
YELLOW = "\033[93m"
RED = "\033[91m"
BOLD = "\033[1m"
END = "\033[0m"
pio.templates.default = "plotly_white"
palette = px.colors.sequential.Plasma
print(f"{color.BOLD}Color palette:{color.END}\n")
sb.color_palette(palette)
# # Data set
# ## Data set description
# The competition data comprises over fifty anonymized health characteristics linked to three age-related conditions.
# ### Training set
# - __Id:__ Unique identifier for each observation.
# - __AB-GL:__ Fifty-six anonymized health characteristics. All are numeric except for EJ, which is categorical.
# - __Class:__ A binary target. 1 indicates the subject has been diagnosed with one of the three conditions, 0 indicates they have not.
# ### Test set
# - Our goal is to predict the probability that a subject in this set belongs to each of the two classes.
# ### greeks.csv - supplemental metadata, only available for the training set.
# - __Alpha:__ Identifies the type of age-related condition, if present.
# - __A:__ No age-related condition. Corresponds to class 0.
# - __B, D, G:__ The three age-related conditions. Correspond to class 1.
# - __Beta, Gamma, Delta:__ Three experimental characteristics.
# - __Epsilon:__ The date the data for this subject was collected. Note that all of the data in the test set was collected after the training set was collected.
# __💡 We can model relationships between the three diseases separately instead of computing the joint probability of having any one of the three conditions.__
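# A minimal sketch of that idea (hypothetical helper, not tuned): fit a multiclass model on
# the Alpha label from greeks.csv and fold the per-condition probabilities for B, D and G back
# into a single probability for Class 1.
from sklearn.ensemble import RandomForestClassifier


def alpha_to_binary_proba(X, alpha_labels, X_new):
    """Fit a multiclass model on Alpha (A/B/D/G) and return P(Class 1) = P(B) + P(D) + P(G)."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, alpha_labels)
    proba = clf.predict_proba(X_new)  # columns follow clf.classes_
    condition_cols = [i for i, c in enumerate(clf.classes_) if c != "A"]
    return proba[:, condition_cols].sum(axis=1)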
# ## Load data set
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
data_dir = Path("/kaggle/input/icr-identify-age-related-conditions/")
train_df = pd.read_csv(data_dir / "train.csv", index_col=0)
test_df = pd.read_csv(data_dir / "test.csv", index_col=0)
supplemental_df = pd.read_csv(data_dir / "greeks.csv", index_col=0)
# ## Data statistics
train_df.shape, test_df.shape, supplemental_df.shape
train_df.head()
test_df
# __Note: When the submission is scored, this example test data will be replaced with the full test set. There are about 400 rows in the full test set.__
supplemental_df.head()
supplemental_df["Epsilon"] = supplemental_df["Epsilon"].replace("Unknown", pd.NaT)
train_df = pd.concat([train_df, supplemental_df], axis=1)
# ## Check for missing data
train_df.isnull().sum()
# # Exploratory Data Analysis
target = "Class"
# ## Distribution of age-related conditions
fig = px.histogram(
train_df, x=target, color_discrete_sequence=palette[:1], text_auto=True
)
fig.update_layout(
showlegend=False,
xaxis=dict(
categoryorder="category ascending",
title="Diagnosed",
tickfont=dict(size=16),
titlefont=dict(size=18),
),
yaxis=dict(
title="Number of patients", tickfont=dict(size=16), titlefont=dict(size=18)
),
)
fig.update_xaxes(type="category")
fig.show()
# ## Distribution of type of age-related condition
fig = px.histogram(
train_df.loc[train_df[target] == 1],
x="Alpha",
color="Alpha",
color_discrete_sequence=palette[:3],
text_auto=True,
)
fig.update_layout(
showlegend=False,
xaxis=dict(categoryorder="category ascending"),
)
fig.show()
# ## Time distribution of data collection
fig = px.histogram(
train_df,
y=target,
x="Epsilon",
color=target,
color_discrete_sequence=[palette[5], palette[0]],
histfunc="count",
text_auto=True,
)
fig.update_layout(
showlegend=True,
xaxis=dict(categoryorder="category ascending"),
)
fig.show()
# ## Experimental Characteristics
# #### ⚠️ These features are only available for training data.
experimental_characteristics = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon"]
for c in ["Beta", "Gamma", "Delta"]:
fig = px.histogram(
train_df,
x=c,
color=target,
barmode="group",
color_discrete_sequence=[palette[7], palette[0]],
text_auto=True,
)
fig.update_layout(
showlegend=True,
xaxis=dict(
categoryorder="category ascending",
tickfont=dict(size=16),
titlefont=dict(size=18),
),
yaxis=dict(
title="Number of patients", tickfont=dict(size=16), titlefont=dict(size=18)
),
)
fig.show()
# ## Health Characteristics
train_df = train_df.rename(columns={c: c.rstrip() for c in train_df.columns})
test_df = test_df.rename(columns={c: c.rstrip() for c in test_df.columns})
health_characteristics_columns = train_df.drop(
columns=[target] + experimental_characteristics
).columns.tolist()
print(f"Number of health characteristics: {len(health_characteristics_columns)}")
# ## Feature correlation
corr_df = train_df[health_characteristics_columns].corr()
fig = px.imshow(corr_df, text_auto=True, color_continuous_scale=palette)
fig.update_traces(
hovertemplate="Feature 1: %{x} <br>Feature 2: %{y} <br> Correlation: %{z}",
name="",
showlegend=False,
texttemplate="%{z:.3f}",
)
fig.show()
fig = make_subplots(28, 2, subplot_titles=health_characteristics_columns)
for i, c in enumerate(health_characteristics_columns):
if c == "EJ":
for c in train_df["EJ"].unique():
subset_df = train_df.loc[train_df["EJ"] == c]
fig.add_trace(
go.Histogram(x=subset_df["Alpha"], y=subset_df["EJ"]),
row=1 + i // 2,
col=1 + i % 2,
)
else:
fig.add_trace(
go.Box(x=train_df["Alpha"], y=train_df[c]), row=1 + i // 2, col=1 + i % 2
)
fig.update_layout(
height=3200, showlegend=False, xaxis=dict(categoryorder="category ascending")
)
fig.show()
# ## Data cleaning
"""Fill missing values."""
for c in health_characteristics_columns:
if c == "EJ":
m = train_df["EJ"].mode()
else:
m = train_df[c].median()
train_df[c] = train_df[c].fillna(m)
test_df[c] = test_df[c].fillna(m)
"""Encode categorical features."""
df = pd.concat([train_df, test_df])
df = pd.concat([df.drop(columns="EJ"), pd.get_dummies(df["EJ"])], axis=1)
train_df = df.iloc[: len(train_df)]
test_df = df.iloc[len(train_df) :]
# # Model Training
features_to_drop = ["BQ", "EL"] + experimental_characteristics
X_train, Y_train = train_df.drop(columns=[target] + features_to_drop), train_df[target]
X_test = test_df.copy().drop(columns=[target] + features_to_drop)
X_train.shape, Y_train.shape, X_test.shape
scaler = RobustScaler()
X_train.loc[:] = scaler.fit_transform(X_train)
X_test.loc[:] = scaler.transform(X_test)
def objective_xgb(trial):
params = {
"objective": "binary:logistic",
"tree_method": trial.suggest_categorical("tree_method", ["gpu_hist"]),
"reg_lambda": trial.suggest_float("reg_lambda", 1e-3, 1e2, log=True),
"colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0, step=0.1),
"colsample_bylevel": trial.suggest_float(
"colsample_bylevel", 0.5, 1.0, step=0.1
),
"subsample": trial.suggest_float("subsample", 0.5, 1.0, step=0.1),
"learning_rate": trial.suggest_float("learning_rate", 1e-2, 1e0, log=True),
"n_estimators": trial.suggest_int("n_estimators", 50, 200, step=10),
"max_depth": trial.suggest_int("max_depth", 4, 10, step=2),
"grow_policy": trial.suggest_categorical(
"grow_policy", ["depthwise", "lossguide"]
),
}
kf = StratifiedKFold(n_splits=5, random_state=42, shuffle=True)
val_split_loss = []
for train_idx, val_idx in kf.split(X_train, Y_train):
X_train_split, X_val_split = X_train.iloc[train_idx], X_train.iloc[val_idx]
Y_train_split, Y_val_split = Y_train.iloc[train_idx], Y_train.iloc[val_idx]
estimator = XGBClassifier(**params)
estimator.fit(
X_train_split,
Y_train_split,
eval_set=[(X_val_split, Y_val_split)],
early_stopping_rounds=5,
verbose=0,
)
Y_pred_val = pd.Series(
estimator.predict_proba(X_val_split)[:, 1], index=X_val_split.index
)
loss = balanced_logarithmic_loss(Y_val_split, Y_pred_val)
val_split_loss.append(loss)
val_log_loss = np.mean(val_split_loss)
return val_log_loss
def balanced_logarithmic_loss(y_true, y_pred):
    """Takes true binary labels and probability of class 1, and returns balanced log loss."""
    # clip the predicted probabilities before taking logs so that log(0) cannot occur
    y_pred = np.clip(y_pred, 1e-15, 1 - 1e-15)
    y_true_expanded = np.zeros((len(y_true), 2))
    y_true_expanded[np.arange(len(y_true)), y_true.astype(int)] = 1.0
    y_pred_expanded = np.zeros((len(y_true), 2))
    y_pred_expanded[:, 1] = y_pred
    y_pred_expanded[:, 0] = 1 - y_pred
    class_weights = np.sum(y_true_expanded, axis=0) / len(y_true_expanded)
    loss = -np.sum(y_true_expanded * np.log(y_pred_expanded), axis=0)
    balanced_loss = np.sum(loss * class_weights)
    return balanced_loss
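# Quick sanity check of the metric on a tiny hand-made example (values are illustrative):
# two class-0 and two class-1 rows with fairly confident, correct predictions.
_toy_true = np.array([0, 0, 1, 1])
_toy_pred = np.array([0.1, 0.2, 0.8, 0.9])
print(f"Toy balanced log loss: {balanced_logarithmic_loss(_toy_true, _toy_pred):.4f}")  # ~0.33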
study = optuna.create_study(direction="minimize")
study.optimize(objective_xgb, n_trials=20, show_progress_bar=True)
print("Number of finished trials:", len(study.trials))
study.trials_dataframe().sort_values(by="value").head()
# ### Best hyper-parameters
best_params = study.best_trial.params
best_params
Y_pred_test_xgb = []
val_split_loss = []
feature_importances = []
n_repeats = 1
n_splits = 5
kf = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=1)
for i, (train_idx, val_idx) in enumerate(kf.split(X_train, Y_train)):
X_train_split, X_val_split = X_train.iloc[train_idx], X_train.iloc[val_idx]
Y_train_split, Y_val_split = Y_train.iloc[train_idx], Y_train.iloc[val_idx]
estimator = XGBClassifier(**best_params, eval_metric=balanced_logarithmic_loss)
estimator.fit(
X_train_split,
Y_train_split,
eval_set=[(X_val_split, Y_val_split)],
early_stopping_rounds=3,
verbose=0,
)
Y_pred_val = pd.Series(
estimator.predict_proba(X_val_split)[:, 1], index=X_val_split.index
)
loss = balanced_logarithmic_loss(Y_val_split, Y_pred_val)
val_split_loss.append(loss)
Y_pred_test_xgb.append(estimator.predict_proba(X_test))
feature_importances.append(estimator.feature_importances_)
print(f"Validation set loss for fold {i}: {loss:.4f}")
feature_importances = np.mean(feature_importances, axis=0)
val_loss = np.mean(val_split_loss)
Y_pred_test_xgb_1 = np.mean(Y_pred_test_xgb, axis=0)
# ## Evaluation on Validation set
print(
    f"{color.BOLD}Loss for validation set for final XGBoost model: {val_loss:.3f}{color.END}"
)
feature_importances = (
pd.Series(data=feature_importances, index=X_train.columns.tolist())
.sort_values(ascending=False)
.head(15)
)
fig = px.bar(
feature_importances.reset_index(),
y="index",
x=0,
height=800,
color=0,
text_auto=True,
)
fig.update_layout(
yaxis_automargin=True,
xaxis_title="Feature importance",
yaxis_title="Feature",
coloraxis_showscale=False,
)
fig.update_traces(textposition="outside", texttemplate="%{x:.3f}")
fig.show()
# # Submission
Y_pred_test = pd.DataFrame(
data=Y_pred_test_xgb_1, index=X_test.index, columns=["class_0", "class_1"]
)
Y_pred_test.sample(5)
submission_df = Y_pred_test.reset_index()
submission_df.head()
submission_df.to_csv("submission.csv", index=False)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/335/129335440.ipynb
| null | null |
[{"Id": 129335440, "ScriptId": 38454451, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1382879, "CreationDate": "05/12/2023 22:34:12", "VersionNumber": 1.0, "Title": "Plotly EDA \ud83c\udfa8 | XGBoost | Optuna tuning", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 354.0, "LinesInsertedFromPrevious": 38.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 316.0, "LinesInsertedFromFork": 38.0, "LinesDeletedFromFork": 46.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 316.0, "TotalVotes": 0}]
| null | null | null | null |
# # ICR Challenge - Identifying Age-Related Conditions 👨⚕️👴👵
# ## Table of contents
# 1. [Introduction](#Introduction)
# 2. [Load libraries](#Load-libraries)
# 3. [Data set](#Data-set)
# - [Load data set](#Load-data-set)
# - [Data set description](#Data-set-description)
# - [Data statistics](#Data-statistics)
# - [Check for missing data](#Check-for-missing-data)
# 4. [Exploratory Data Analysis](#Exploratory-Data-Analysis)
# - [Distribution of age-related conditions](#Distribution-of-age-related-conditions)
# - [Distribution of type of age-related condition](#Distribution-of-type-of-age-related-condition)
# - [Time distribution of data collection](#Time-distribution-of-data-collection)
# - [Experimental Characteristics](#Experimental-Characteristics)
# - [Health Characteristics](#Health-Characteristics)
# - [Feature correlation](#Feature-correlation)
# - [Data cleaning](#Data-cleaning)
# 5. [Model Training](#Model-Training)
# 6. [Submission](#Submission)
# # Introduction
# The goal of this competition is to predict if a person has or has not been diagnosed with one of three medical conditions (a binary classification problem), using various measurements of health characteristics.
# # Load Libraries
import warnings
warnings.simplefilter("ignore")
import os
import datetime
from pathlib import Path
import numpy as np
import pandas as pd
pd.set_option("display.max_rows", None)
import plotly
import plotly.io as pio
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import plotly.figure_factory as ff
import seaborn as sb
import matplotlib.pyplot as plt
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import (
train_test_split,
KFold,
StratifiedKFold,
RepeatedKFold,
RepeatedStratifiedKFold,
)
from xgboost import XGBClassifier
import optuna
from tqdm.notebook import tqdm
pd.set_option("display.max_columns", None)
pd.set_option("display.max_colwidth", None)
plotly.offline.init_notebook_mode()
class color:
PURPLE = "\033[95m"
CYAN = "\033[96m"
DARKCYAN = "\033[36m"
BLUE = "\033[94m"
GREEN = "\033[92m"
YELLOW = "\033[93m"
RED = "\033[91m"
BOLD = "\033[1m"
END = "\033[0m"
pio.templates.default = "plotly_white"
palette = px.colors.sequential.Plasma
print(f"{color.BOLD}Color palette:{color.END}\n")
sb.color_palette(palette)
# # Data set
# ## Data set description
# The competition data comprises over fifty anonymized health characteristics linked to three age-related conditions.
# ### Training set
# - __Id:__ Unique identifier for each observation.
# - __AB-GL:__ Fifty-six anonymized health characteristics. All are numeric except for EJ, which is categorical.
# - __Class:__ A binary target. 1 indicates the subject has been diagnosed with one of the three conditions, 0 indicates they have not.
# ### Test set
# - Our goal is to predict the probability that a subject in this set belongs to each of the two classes.
# ### greeks.csv - supplemental metadata, only available for the training set.
# - __Alpha:__ Identifies the type of age-related condition, if present.
# - __A:__ No age-related condition. Corresponds to class 0.
# - __B, D, G:__ The three age-related conditions. Correspond to class 1.
# - __Beta, Gamma, Delta:__ Three experimental characteristics.
# - __Epsilon:__ The date the data for this subject was collected. Note that all of the data in the test set was collected after the training set was collected.
# __💡 We can model relationships between the three diseases separately instead of computing the joint probability of having any one of the three conditions.__
# ## Load data set
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
data_dir = Path("/kaggle/input/icr-identify-age-related-conditions/")
train_df = pd.read_csv(data_dir / "train.csv", index_col=0)
test_df = pd.read_csv(data_dir / "test.csv", index_col=0)
supplemental_df = pd.read_csv(data_dir / "greeks.csv", index_col=0)
# ## Data statistics
train_df.shape, test_df.shape, supplemental_df.shape
train_df.head()
test_df
# __Note: When the submission is scored, this example test data will be replaced with the full test set. There are about 400 rows in the full test set.__
supplemental_df.head()
supplemental_df["Epsilon"] = supplemental_df["Epsilon"].replace("Unknown", pd.NaT)
train_df = pd.concat([train_df, supplemental_df], axis=1)
# ## Check for missing data
train_df.isnull().sum()
# # Exploratory Data Analysis
target = "Class"
# ## Distribution of age-related conditions
fig = px.histogram(
train_df, x=target, color_discrete_sequence=palette[:1], text_auto=True
)
fig.update_layout(
showlegend=False,
xaxis=dict(
categoryorder="category ascending",
title="Diagnosed",
tickfont=dict(size=16),
titlefont=dict(size=18),
),
yaxis=dict(
title="Number of patients", tickfont=dict(size=16), titlefont=dict(size=18)
),
)
fig.update_xaxes(type="category")
fig.show()
# ## Distribution of type of age-related condition
fig = px.histogram(
train_df.loc[train_df[target] == 1],
x="Alpha",
color="Alpha",
color_discrete_sequence=palette[:3],
text_auto=True,
)
fig.update_layout(
showlegend=False,
xaxis=dict(categoryorder="category ascending"),
)
fig.show()
# ## Time distribution of data collection
fig = px.histogram(
train_df,
y=target,
x="Epsilon",
color=target,
color_discrete_sequence=[palette[5], palette[0]],
histfunc="count",
text_auto=True,
)
fig.update_layout(
showlegend=True,
xaxis=dict(categoryorder="category ascending"),
)
fig.show()
# ## Experimental Characteristics
# #### ⚠️ These features are only available for training data.
experimental_characteristics = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon"]
for c in ["Beta", "Gamma", "Delta"]:
fig = px.histogram(
train_df,
x=c,
color=target,
barmode="group",
color_discrete_sequence=[palette[7], palette[0]],
text_auto=True,
)
fig.update_layout(
showlegend=True,
xaxis=dict(
categoryorder="category ascending",
tickfont=dict(size=16),
titlefont=dict(size=18),
),
yaxis=dict(
title="Number of patients", tickfont=dict(size=16), titlefont=dict(size=18)
),
)
fig.show()
# ## Health Characteristics
train_df = train_df.rename(columns={c: c.rstrip() for c in train_df.columns})
test_df = test_df.rename(columns={c: c.rstrip() for c in test_df.columns})
health_characteristics_columns = train_df.drop(
columns=[target] + experimental_characteristics
).columns.tolist()
print(f"Number of health characteristics: {len(health_characteristics_columns)}")
# ## Feature correlation
corr_df = train_df[health_characteristics_columns].corr()
fig = px.imshow(corr_df, text_auto=True, color_continuous_scale=palette)
fig.update_traces(
hovertemplate="Feature 1: %{x} <br>Feature 2: %{y} <br> Correlation: %{z}",
name="",
showlegend=False,
texttemplate="%{z:.3f}",
)
fig.show()
fig = make_subplots(28, 2, subplot_titles=health_characteristics_columns)
for i, c in enumerate(health_characteristics_columns):
if c == "EJ":
for c in train_df["EJ"].unique():
subset_df = train_df.loc[train_df["EJ"] == c]
fig.add_trace(
go.Histogram(x=subset_df["Alpha"], y=subset_df["EJ"]),
row=1 + i // 2,
col=1 + i % 2,
)
else:
fig.add_trace(
go.Box(x=train_df["Alpha"], y=train_df[c]), row=1 + i // 2, col=1 + i % 2
)
fig.update_layout(
height=3200, showlegend=False, xaxis=dict(categoryorder="category ascending")
)
fig.show()
# ## Data cleaning
"""Fill missing values."""
for c in health_characteristics_columns:
if c == "EJ":
m = train_df["EJ"].mode()
else:
m = train_df[c].median()
train_df[c] = train_df[c].fillna(m)
test_df[c] = test_df[c].fillna(m)
"""Encode categorical features."""
df = pd.concat([train_df, test_df])
df = pd.concat([df.drop(columns="EJ"), pd.get_dummies(df["EJ"])], axis=1)
train_df = df.iloc[: len(train_df)]
test_df = df.iloc[len(train_df) :]
# # Model Training
features_to_drop = ["BQ", "EL"] + experimental_characteristics
X_train, Y_train = train_df.drop(columns=[target] + features_to_drop), train_df[target]
X_test = test_df.copy().drop(columns=[target] + features_to_drop)
X_train.shape, Y_train.shape, X_test.shape
scaler = RobustScaler()
X_train.loc[:] = scaler.fit_transform(X_train)
X_test.loc[:] = scaler.transform(X_test)
def objective_xgb(trial):
params = {
"objective": "binary:logistic",
"tree_method": trial.suggest_categorical("tree_method", ["gpu_hist"]),
"reg_lambda": trial.suggest_float("reg_lambda", 1e-3, 1e2, log=True),
"colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0, step=0.1),
"colsample_bylevel": trial.suggest_float(
"colsample_bylevel", 0.5, 1.0, step=0.1
),
"subsample": trial.suggest_float("subsample", 0.5, 1.0, step=0.1),
"learning_rate": trial.suggest_float("learning_rate", 1e-2, 1e0, log=True),
"n_estimators": trial.suggest_int("n_estimators", 50, 200, step=10),
"max_depth": trial.suggest_int("max_depth", 4, 10, step=2),
"grow_policy": trial.suggest_categorical(
"grow_policy", ["depthwise", "lossguide"]
),
}
kf = StratifiedKFold(n_splits=5, random_state=42, shuffle=True)
val_split_loss = []
for train_idx, val_idx in kf.split(X_train, Y_train):
X_train_split, X_val_split = X_train.iloc[train_idx], X_train.iloc[val_idx]
Y_train_split, Y_val_split = Y_train.iloc[train_idx], Y_train.iloc[val_idx]
estimator = XGBClassifier(**params)
estimator.fit(
X_train_split,
Y_train_split,
eval_set=[(X_val_split, Y_val_split)],
early_stopping_rounds=5,
verbose=0,
)
Y_pred_val = pd.Series(
estimator.predict_proba(X_val_split)[:, 1], index=X_val_split.index
)
loss = balanced_logarithmic_loss(Y_val_split, Y_pred_val)
val_split_loss.append(loss)
val_log_loss = np.mean(val_split_loss)
return val_log_loss
def balanced_logarithmic_loss(y_true, y_pred):
    """Takes true binary labels and probability of class 1, and returns balanced log loss."""
    # clip the predicted probabilities before taking logs so that log(0) cannot occur
    y_pred = np.clip(y_pred, 1e-15, 1 - 1e-15)
    y_true_expanded = np.zeros((len(y_true), 2))
    y_true_expanded[np.arange(len(y_true)), y_true.astype(int)] = 1.0
    y_pred_expanded = np.zeros((len(y_true), 2))
    y_pred_expanded[:, 1] = y_pred
    y_pred_expanded[:, 0] = 1 - y_pred
    class_weights = np.sum(y_true_expanded, axis=0) / len(y_true_expanded)
    loss = -np.sum(y_true_expanded * np.log(y_pred_expanded), axis=0)
    balanced_loss = np.sum(loss * class_weights)
    return balanced_loss
study = optuna.create_study(direction="minimize")
study.optimize(objective_xgb, n_trials=20, show_progress_bar=True)
print("Number of finished trials:", len(study.trials))
study.trials_dataframe().sort_values(by="value").head()
# ### Best hyper-parameters
best_params = study.best_trial.params
best_params
Y_pred_test_xgb = []
val_split_loss = []
feature_importances = []
n_repeats = 1
n_splits = 5
kf = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=1)
for i, (train_idx, val_idx) in enumerate(kf.split(X_train, Y_train)):
X_train_split, X_val_split = X_train.iloc[train_idx], X_train.iloc[val_idx]
Y_train_split, Y_val_split = Y_train.iloc[train_idx], Y_train.iloc[val_idx]
estimator = XGBClassifier(**best_params, eval_metric=balanced_logarithmic_loss)
estimator.fit(
X_train_split,
Y_train_split,
eval_set=[(X_val_split, Y_val_split)],
early_stopping_rounds=3,
verbose=0,
)
Y_pred_val = pd.Series(
estimator.predict_proba(X_val_split)[:, 1], index=X_val_split.index
)
loss = balanced_logarithmic_loss(Y_val_split, Y_pred_val)
val_split_loss.append(loss)
Y_pred_test_xgb.append(estimator.predict_proba(X_test))
feature_importances.append(estimator.feature_importances_)
print(f"Validation set loss for fold {i}: {loss:.4f}")
feature_importances = np.mean(feature_importances, axis=0)
val_loss = np.mean(val_split_loss)
Y_pred_test_xgb_1 = np.mean(Y_pred_test_xgb, axis=0)
# ## Evaluation on Validation set
print(
    f"{color.BOLD}Loss for validation set for final XGBoost model: {val_loss:.3f}{color.END}"
)
feature_importances = (
pd.Series(data=feature_importances, index=X_train.columns.tolist())
.sort_values(ascending=False)
.head(15)
)
fig = px.bar(
feature_importances.reset_index(),
y="index",
x=0,
height=800,
color=0,
text_auto=True,
)
fig.update_layout(
yaxis_automargin=True,
xaxis_title="Feature importance",
yaxis_title="Feature",
coloraxis_showscale=False,
)
fig.update_traces(textposition="outside", texttemplate="%{x:.3f}")
fig.show()
# # Submission
Y_pred_test = pd.DataFrame(
data=Y_pred_test_xgb_1, index=X_test.index, columns=["class_0", "class_1"]
)
Y_pred_test.sample(5)
submission_df = Y_pred_test.reset_index()
submission_df.head()
submission_df.to_csv("submission.csv", index=False)
| false | 0 | 4,337 | 0 | 4,337 | 4,337 |
||
129335974
|
<jupyter_start><jupyter_text>Mushroom Classification
### Context
Although this dataset was originally contributed to the UCI Machine Learning repository nearly 30 years ago, mushroom hunting (otherwise known as "shrooming") is enjoying new peaks in popularity. Learn which features spell certain death and which are most palatable in this dataset of mushroom characteristics. And how certain can your model be?
### Content
This dataset includes descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota Family Mushroom drawn from The Audubon Society Field Guide to North American Mushrooms (1981). Each species is identified as definitely edible, definitely poisonous, or of unknown edibility and not recommended. This latter class was combined with the poisonous one. The Guide clearly states that there is no simple rule for determining the edibility of a mushroom; no rule like "leaflets three, let it be'' for Poisonous Oak and Ivy.
- **Time period**: Donated to UCI ML 27 April 1987
### Inspiration
- What types of machine learning models perform best on this dataset?
- Which features are most indicative of a poisonous mushroom?
Kaggle dataset identifier: mushroom-classification
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import f1_score

df = pd.read_csv("/kaggle/input/mushrooms.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df.drop("class", axis=1), df["class"]
)
def transform_labels_to_01(labels, pos_lab):
return [1 if y == pos_lab else 0 for y in labels]
y_train_01 = transform_labels_to_01(y_train, "e")
y_test_01 = transform_labels_to_01(y_test, "e")
one_hot = OneHotEncoder()
X_train_tr = one_hot.fit_transform(X_train)
rnd_clf = RandomForestClassifier()
params = {"n_estimators": [100, 250], "max_leaf_nodes": [20, 30]}
grid_cv = GridSearchCV(rnd_clf, params, verbose=3, cv=3, scoring="f1")
grid_cv.fit(X_train_tr, y_train_01)
best_clf = grid_cv.best_estimator_
full_pipeline = Pipeline([("one_hot", one_hot), ("clf", best_clf)])
y_pred = full_pipeline.predict(X_test)
f1_score(y_test_01, y_pred)
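# The dataset description also asks which features are most indicative of a poisonous mushroom;
# a minimal sketch (using the already-fitted encoder and forest above; column naming depends on
# the installed scikit-learn version) is to map the one-hot column names to the forest's importances:
importances = pd.Series(
    best_clf.feature_importances_, index=one_hot.get_feature_names_out()
).sort_values(ascending=False)
print(importances.head(10))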
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/335/129335974.ipynb
|
mushroom-classification
| null |
[{"Id": 129335974, "ScriptId": 38454440, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15078612, "CreationDate": "05/12/2023 22:46:10", "VersionNumber": 1.0, "Title": "Random Forest method for mushroom classification", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 54.0, "LinesInsertedFromPrevious": 43.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 11.0, "LinesInsertedFromFork": 43.0, "LinesDeletedFromFork": 41.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 11.0, "TotalVotes": 0}]
|
[{"Id": 185284648, "KernelVersionId": 129335974, "SourceDatasetVersionId": 974}]
|
[{"Id": 974, "DatasetId": 478, "DatasourceVersionId": 974, "CreatorUserId": 495305, "LicenseName": "CC0: Public Domain", "CreationDate": "12/01/2016 23:08:00", "VersionNumber": 1.0, "Title": "Mushroom Classification", "Slug": "mushroom-classification", "Subtitle": "Safe to eat or deadly poison?", "Description": "### Context\n\nAlthough this dataset was originally contributed to the UCI Machine Learning repository nearly 30 years ago, mushroom hunting (otherwise known as \"shrooming\") is enjoying new peaks in popularity. Learn which features spell certain death and which are most palatable in this dataset of mushroom characteristics. And how certain can your model be?\n\n### Content \n\nThis dataset includes descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota Family Mushroom drawn from The Audubon Society Field Guide to North American Mushrooms (1981). Each species is identified as definitely edible, definitely poisonous, or of unknown edibility and not recommended. This latter class was combined with the poisonous one. The Guide clearly states that there is no simple rule for determining the edibility of a mushroom; no rule like \"leaflets three, let it be'' for Poisonous Oak and Ivy.\n\n- **Time period**: Donated to UCI ML 27 April 1987\n\n### Inspiration\n\n- What types of machine learning models perform best on this dataset?\n\n- Which features are most indicative of a poisonous mushroom?\n\n### Acknowledgements\n\nThis dataset was originally donated to the UCI Machine Learning repository. You can learn more about past research using the data [here][1]. \n\n#[Start a new kernel][2]\n\n\n [1]: https://archive.ics.uci.edu/ml/datasets/Mushroom\n [2]: https://www.kaggle.com/uciml/mushroom-classification/kernels?modal=true", "VersionNotes": "Initial release", "TotalCompressedBytes": 374003.0, "TotalUncompressedBytes": 374003.0}]
|
[{"Id": 478, "CreatorUserId": 495305, "OwnerUserId": NaN, "OwnerOrganizationId": 7.0, "CurrentDatasetVersionId": 974.0, "CurrentDatasourceVersionId": 974.0, "ForumId": 2099, "Type": 2, "CreationDate": "12/01/2016 23:08:00", "LastActivityDate": "02/06/2018", "TotalViews": 873597, "TotalDownloads": 114985, "TotalVotes": 2206, "TotalKernels": 1371}]
| null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import f1_score

df = pd.read_csv("/kaggle/input/mushrooms.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df.drop("class", axis=1), df["class"]
)
def transform_labels_to_01(labels, pos_lab):
return [1 if y == pos_lab else 0 for y in labels]
y_train_01 = transform_labels_to_01(y_train, "e")
y_test_01 = transform_labels_to_01(y_test, "e")
one_hot = OneHotEncoder()
X_train_tr = one_hot.fit_transform(X_train)
rnd_clf = RandomForestClassifier()
params = {"n_estimators": [100, 250], "max_leaf_nodes": [20, 30]}
grid_cv = GridSearchCV(rnd_clf, params, verbose=3, cv=3, scoring="f1")
grid_cv.fit(X_train_tr, y_train_01)
best_clf = grid_cv.best_estimator_
full_pipeline = Pipeline([("one_hot", one_hot), ("clf", best_clf)])
y_pred = full_pipeline.predict(X_test)
f1_score(y_test_01, y_pred)
| false | 0 | 570 | 0 | 873 | 570 |
||
129335912
|
<jupyter_start><jupyter_text>Flicktime
Kaggle dataset identifier: flicktime
<jupyter_script>import pandas as pd
import numpy as np
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import json
import gc
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import TfidfVectorizer
import pickle
with open("/kaggle/input/flicktime/movielens1.json", "r") as f:
data = json.load(f)
tmdb_data = pd.json_normalize(data)
link = pd.read_csv("/kaggle/input/flicktime/link.csv")
rating = pd.read_csv("/kaggle/input/flicktime/rating.csv")
tmdb_data.drop(
[
"belongs_to_collection",
"belongs_to_collection.backdrop_path",
"belongs_to_collection.poster_path",
"belongs_to_collection.name",
"belongs_to_collection.id",
],
inplace=True,
axis=1,
)
tmdb_data.dropna(inplace=True)
merged_df = tmdb_data.merge(link, left_on="id", right_on="tmdbId")
del tmdb_data
gc.collect()
merged_df.head()
merged_df[merged_df["title"].str.startswith("Thor")]
merged_rating = rating.loc[rating["movieId"].isin(merged_df.movieId.values)]
merged_rating.userId.max()
merged_rating.head()
# try:
# value = merged_rating.loc[(merged_rating['userId'] == 1) & (merged_rating['movieId'] == 100000000), 'rating'].iloc[0]
# except IndexError:
# value = None
# print(value)
# user_df = pd.DataFrame(columns=['email','userId'])
# max( merged_rating[['userID']])
# pickle.dump(user_df, open('user_df.pkl', 'wb'))
# user_df.columns
# user_df.loc[user_df['email'] == email, 'userId'].iloc[0] if not user_df.loc[user_df['email'] == email].empty else None
# userid = df.loc[df['Email'] == email, 'UserID'].iloc[0] if not df.loc[df['Email'] == email].empty else None
# merged_rating.head()
# # Original dictionary with string keys
# original_dict = {'1': 3.5, '2': 4.5, '3': 5}
# # Create a new dictionary with integer keys
# new_dict = {}
# for key, value in original_dict.items():
# new_dict[int(key)] = value
# # Print the new dictionary
# print(new_dict)
# def add_rating(merged_rating, ratingsMap, userId):
# # Create a new dictionary with integer keys
# new_dict = {}
# for key, value in ratingsMap.items():
# new_dict[int(key)] = value
# # Filter the merged_rating dataframe for the specific user and the movies in the rating map
# user_ratings = merged_rating[(merged_rating['userId'] == userId) & (merged_rating['movieId'].isin(ratingsMap.keys()))]
# print(user_ratings)
# # Loop over the movie IDs in the rating map
# for movie_id in ratingsMap.keys():
# # Check if the movie ID is already in the user_ratings dataframe
# if movie_id not in user_ratings['movieId'].values:
# # If not, add the new rating as a new row to the user_ratings dataframe
# new_row = pd.DataFrame({'userId': [userId], 'movieId': [movie_id], 'rating': [ratingsMap[movie_id]], 'timestamp': [pd.Timestamp.now()]})
# user_ratings = user_ratings.append(new_row)
# # print(user_ratings)
# # Update the merged_rating dataframe with the updated user_ratings dataframe
# merged_rating.update(user_ratings)
# print(merged_rating.head())
# add_rating(merged_rating, original_dict, 1)
# Define a TF-IDF vectorizer for the overview field
tfidf = TfidfVectorizer(stop_words="english")
# Compute the TF-IDF matrix for the overviews
tfidf_matrix = tfidf.fit_transform(merged_df["overview"].fillna(""))
# Compute the cosine similarities between movies based on the overviews
cosine_similarities = cosine_similarity(tfidf_matrix)
del tfidf_matrix
gc.collect()
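# A small illustrative helper (a sketch, not used by the recommender below):
# given a movie title, return the most similar titles by overview text. It
# assumes merged_df still has the default RangeIndex produced by the merge
# above, so its positional index lines up with the rows of cosine_similarities.
def get_similar_by_title(title, top_n=10):
    matches = merged_df.index[merged_df["title"] == title]
    if len(matches) == 0:
        return pd.DataFrame(columns=["title", "overview"])
    idx = matches[0]
    similar_idx = cosine_similarities[idx].argsort()[::-1][1 : top_n + 1]
    return merged_df.iloc[similar_idx][["title", "overview"]]

# Example usage: get_similar_by_title("Thor", top_n=5)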
# pickle.dump(cosine_similarities, open('cosine_similarities.pkl', 'wb'))
# pickle.dump(merged_rating, open('merged_rating.pkl', 'wb'))
# pickle.dump(merged_df, open('merged_df.pkl', 'wb'))
# temp = pickle.load(open('/kaggle/working/merged_rating.pkl', 'rb'))
# temp1 = pickle.load(open('/kaggle/working/merged_df.pkl', 'rb'))
# del temp, temp1
# gc.collect()
# Define the number of recommendations to make
num_recommendations = 20
print(1 in merged_rating["userId"].values)
def get_hybrid_recommendations(
user_id, watch_history, collaborative_model, content_model
):
# Get the user's movie history
user_history = merged_rating[merged_rating["userId"] == user_id]["movieId"].unique()
temp = []
for i in watch_history:
temp.extend(merged_df.loc[merged_df["id"] == i]["movieId"].values)
watch_history = np.array(temp)
if len(watch_history) > 0:
print("append")
user_history = np.append(user_history, watch_history)
user_history = np.unique(user_history)
# Make predictions for all unseen movies using the collaborative model
unseen_movies = np.setdiff1d(merged_df["movieId"].unique(), user_history)
test_movie_ids = np.array(unseen_movies)
test_user_ids = np.array(len(unseen_movies) * [user_id])
test_input = [test_user_ids, test_movie_ids]
unseen_ratings = model_nn_84.predict(test_input).flatten()
unseen_indices = np.argsort(unseen_ratings)[::-1][:num_recommendations]
collaborative_recommendations = unseen_movies[unseen_indices]
# Get the top similar movies to the user's history using the content model
content_recommendations = []
for movie_id in user_history:
movie_index = merged_df[merged_df["movieId"] == movie_id].index[0]
similar_indices = cosine_similarities[movie_index].argsort()[::-1][
1 : num_recommendations + 1
]
content_recommendations += list(
merged_df.iloc[similar_indices]["movieId"].values
)
content_recommendations = np.array(content_recommendations)
# Combine the two lists of recommendations
hybrid_recommendations = np.array(
np.union1d(collaborative_recommendations, content_recommendations)
)
hybrid_test_user_ids = np.array(len(hybrid_recommendations) * [user_id])
hybrid_test_input = [hybrid_test_user_ids, hybrid_recommendations]
hybrid_ratings = model_nn_84.predict(hybrid_test_input).flatten()
hybrid_indices = np.argsort(hybrid_ratings)[::-1][:num_recommendations]
hybrid_recommendations = hybrid_recommendations[hybrid_indices]
del (
user_history,
unseen_movies,
unseen_ratings,
unseen_indices,
collaborative_recommendations,
content_recommendations,
hybrid_ratings,
hybrid_indices,
)
gc.collect()
return hybrid_recommendations
model_nn_84 = pickle.load(open("/kaggle/input/flicktime/model_nn_84.pkl", "rb"))
user_id = 1
watch_history = [434, 85, 12506, 16320, 186, 388]
hybrid_recommendations = get_hybrid_recommendations(
user_id, watch_history, model_nn_84, cosine_similarities
)
recommended_movies = merged_df[merged_df["movieId"].isin(hybrid_recommendations)]
recommended_movies[["id", "overview", "title"]]
# 434, 85, 12506, 16320, 186, 388
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/335/129335912.ipynb
|
flicktime
|
jy2040
|
[{"Id": 129335912, "ScriptId": 38372924, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1591561, "CreationDate": "05/12/2023 22:44:46", "VersionNumber": 1.0, "Title": "flicktime_final", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 204.0, "LinesInsertedFromPrevious": 204.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 185284558, "KernelVersionId": 129335912, "SourceDatasetVersionId": 5658766}]
|
[{"Id": 5658766, "DatasetId": 3232464, "DatasourceVersionId": 5734182, "CreatorUserId": 1591561, "LicenseName": "Unknown", "CreationDate": "05/10/2023 21:15:49", "VersionNumber": 6.0, "Title": "Flicktime", "Slug": "flicktime", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Data Update 2023-05-10", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3232464, "CreatorUserId": 1591561, "OwnerUserId": 1591561.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5658766.0, "CurrentDatasourceVersionId": 5734182.0, "ForumId": 3297613, "Type": 2, "CreationDate": "05/07/2023 01:12:33", "LastActivityDate": "05/07/2023", "TotalViews": 65, "TotalDownloads": 4, "TotalVotes": 0, "TotalKernels": 4}]
|
[{"Id": 1591561, "UserName": "jy2040", "DisplayName": "Jay Bharadva", "RegisterDate": "01/29/2018", "PerformanceTier": 1}]
|
import pandas as pd
import numpy as np
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import json
import gc
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import TfidfVectorizer
import pickle
with open("/kaggle/input/flicktime/movielens1.json", "r") as f:
data = json.load(f)
tmdb_data = pd.json_normalize(data)
link = pd.read_csv("/kaggle/input/flicktime/link.csv")
rating = pd.read_csv("/kaggle/input/flicktime/rating.csv")
tmdb_data.drop(
[
"belongs_to_collection",
"belongs_to_collection.backdrop_path",
"belongs_to_collection.poster_path",
"belongs_to_collection.name",
"belongs_to_collection.id",
],
inplace=True,
axis=1,
)
tmdb_data.dropna(inplace=True)
merged_df = tmdb_data.merge(link, left_on="id", right_on="tmdbId")
del tmdb_data
gc.collect()
merged_df.head()
merged_df[merged_df["title"].str.startswith("Thor")]
merged_rating = rating.loc[rating["movieId"].isin(merged_df.movieId.values)]
merged_rating.userId.max()
merged_rating.head()
# try:
# value = merged_rating.loc[(merged_rating['userId'] == 1) & (merged_rating['movieId'] == 100000000), 'rating'].iloc[0]
# except IndexError:
# value = None
# print(value)
# user_df = pd.DataFrame(columns=['email','userId'])
# max( merged_rating[['userID']])
# pickle.dump(user_df, open('user_df.pkl', 'wb'))
# user_df.columns
# user_df.loc[user_df['email'] == email, 'userId'].iloc[0] if not user_df.loc[user_df['email'] == email].empty else None
# userid = df.loc[df['Email'] == email, 'UserID'].iloc[0] if not df.loc[df['Email'] == email].empty else None
# merged_rating.head()
# # Original dictionary with string keys
# original_dict = {'1': 3.5, '2': 4.5, '3': 5}
# # Create a new dictionary with integer keys
# new_dict = {}
# for key, value in original_dict.items():
# new_dict[int(key)] = value
# # Print the new dictionary
# print(new_dict)
# def add_rating(merged_rating, ratingsMap, userId):
# # Create a new dictionary with integer keys
# new_dict = {}
# for key, value in ratingsMap.items():
# new_dict[int(key)] = value
# # Filter the merged_rating dataframe for the specific user and the movies in the rating map
# user_ratings = merged_rating[(merged_rating['userId'] == userId) & (merged_rating['movieId'].isin(ratingsMap.keys()))]
# print(user_ratings)
# # Loop over the movie IDs in the rating map
# for movie_id in ratingsMap.keys():
# # Check if the movie ID is already in the user_ratings dataframe
# if movie_id not in user_ratings['movieId'].values:
# # If not, add the new rating as a new row to the user_ratings dataframe
# new_row = pd.DataFrame({'userId': [userId], 'movieId': [movie_id], 'rating': [ratingsMap[movie_id]], 'timestamp': [pd.Timestamp.now()]})
# user_ratings = user_ratings.append(new_row)
# # print(user_ratings)
# # Update the merged_rating dataframe with the updated user_ratings dataframe
# merged_rating.update(user_ratings)
# print(merged_rating.head())
# add_rating(merged_rating, original_dict, 1)
# Define a TF-IDF vectorizer for the overview field
tfidf = TfidfVectorizer(stop_words="english")
# Compute the TF-IDF matrix for the overviews
tfidf_matrix = tfidf.fit_transform(merged_df["overview"].fillna(""))
# Compute the cosine similarities between movies based on the overviews
cosine_similarities = cosine_similarity(tfidf_matrix)
del tfidf_matrix
gc.collect()
# pickle.dump(cosine_similarities, open('cosine_similarities.pkl', 'wb'))
# pickle.dump(merged_rating, open('merged_rating.pkl', 'wb'))
# pickle.dump(merged_df, open('merged_df.pkl', 'wb'))
# temp = pickle.load(open('/kaggle/working/merged_rating.pkl', 'rb'))
# temp1 = pickle.load(open('/kaggle/working/merged_df.pkl', 'rb'))
# del temp, temp1
# gc.collect()
# Define the number of recommendations to make
num_recommendations = 20
print(1 in merged_rating["userId"].values)
def get_hybrid_recommendations(
user_id, watch_history, collaborative_model, content_model
):
# Get the user's movie history
user_history = merged_rating[merged_rating["userId"] == user_id]["movieId"].unique()
temp = []
for i in watch_history:
temp.extend(merged_df.loc[merged_df["id"] == i]["movieId"].values)
watch_history = np.array(temp)
if len(watch_history) > 0:
print("append")
user_history = np.append(user_history, watch_history)
user_history = np.unique(user_history)
# Make predictions for all unseen movies using the collaborative model
unseen_movies = np.setdiff1d(merged_df["movieId"].unique(), user_history)
test_movie_ids = np.array(unseen_movies)
test_user_ids = np.array(len(unseen_movies) * [user_id])
test_input = [test_user_ids, test_movie_ids]
unseen_ratings = model_nn_84.predict(test_input).flatten()
unseen_indices = np.argsort(unseen_ratings)[::-1][:num_recommendations]
collaborative_recommendations = unseen_movies[unseen_indices]
# Get the top similar movies to the user's history using the content model
content_recommendations = []
for movie_id in user_history:
movie_index = merged_df[merged_df["movieId"] == movie_id].index[0]
similar_indices = cosine_similarities[movie_index].argsort()[::-1][
1 : num_recommendations + 1
]
content_recommendations += list(
merged_df.iloc[similar_indices]["movieId"].values
)
content_recommendations = np.array(content_recommendations)
# Combine the two lists of recommendations
hybrid_recommendations = np.array(
np.union1d(collaborative_recommendations, content_recommendations)
)
hybrid_test_user_ids = np.array(len(hybrid_recommendations) * [user_id])
hybrid_test_input = [hybrid_test_user_ids, hybrid_recommendations]
hybrid_ratings = model_nn_84.predict(hybrid_test_input).flatten()
hybrid_indices = np.argsort(hybrid_ratings)[::-1][:num_recommendations]
hybrid_recommendations = hybrid_recommendations[hybrid_indices]
del (
user_history,
unseen_movies,
unseen_ratings,
unseen_indices,
collaborative_recommendations,
content_recommendations,
hybrid_ratings,
hybrid_indices,
)
gc.collect()
return hybrid_recommendations
model_nn_84 = pickle.load(open("/kaggle/input/flicktime/model_nn_84.pkl", "rb"))
user_id = 1
watch_history = [434, 85, 12506, 16320, 186, 388]
hybrid_recommendations = get_hybrid_recommendations(
user_id, watch_history, model_nn_84, cosine_similarities
)
recommended_movies = merged_df[merged_df["movieId"].isin(hybrid_recommendations)]
recommended_movies[["id", "overview", "title"]]
# 434, 85, 12506, 16320, 186, 388
| false | 2 | 2,125 | 1 | 2,145 | 2,125 |
||
129608134
|
<jupyter_start><jupyter_text>Car damage detection
Kaggle dataset identifier: car-damage-detection
<jupyter_script>import random
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
from tqdm.notebook import tqdm
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import img_to_array
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import load_img
import scipy
import os
seed = 0
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
DIRECTORY = "/kaggle/input/car-damage-detection/data1a/training/"
DIRECTORY2 = "/kaggle/input/car-damage-detection/data1a/validation/"
CATEGORIES = ["00-damage", "01-whole"]
# grab the list of images in our dataset directory, then initialize
# the list of data (i.e., images) and class images
print("[INFO] loading images...")
data = []
labels = []
for category in CATEGORIES:
path = os.path.join(DIRECTORY, category)
for img in tqdm(os.listdir(path)):
img_path = os.path.join(path, img)
image = load_img(img_path, target_size=(224, 224))
image = img_to_array(image)
image = preprocess_input(image)
data.append(image)
labels.append(category)
for category in CATEGORIES:
path = os.path.join(DIRECTORY2, category)
for img in tqdm(os.listdir(path)):
img_path = os.path.join(path, img)
image = load_img(img_path, target_size=(224, 224))
image = img_to_array(image)
image = preprocess_input(image)
data.append(image)
labels.append(category)
# use 0 for whole (non-damaged) cars and 1 for the damaged ones
labels = [0 if element == "01-whole" else 1 for element in labels]
labels = to_categorical(labels)
data = np.array(data, dtype="float32")
labels = np.array(labels)
(trainX, testX, trainY, testY) = train_test_split(
data, labels, test_size=0.20, stratify=labels, random_state=42
)
num_classes = 2
resnet_weights_path = (
"/kaggle/input/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5"
)
my_new_model = Sequential()
my_new_model.add(ResNet50(include_top=False, pooling="avg"))
my_new_model.add(Dense(num_classes, activation="softmax"))
# Say not to train first layer (ResNet) model. It is already trained
my_new_model.layers[0].trainable = False
my_new_model.summary()
my_new_model.compile(
optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"]
)
image_size = 224
EPOCHS = 50
BS = 64
# construct the training image generator for data augmentation
aug = ImageDataGenerator(
rotation_range=20,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest",
)
H = my_new_model.fit(
aug.flow(trainX, trainY, batch_size=BS),
steps_per_epoch=len(trainX) // BS,
validation_data=(testX, testY),
validation_steps=len(testX) // BS,
epochs=EPOCHS,
)
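# A quick look at the learning curves (a small optional addition; H.history uses
# the standard Keras keys produced by compiling with metrics=["accuracy"]).
import matplotlib.pyplot as plt

plt.figure(figsize=(8, 4))
plt.plot(H.history["accuracy"], label="train accuracy")
plt.plot(H.history["val_accuracy"], label="val accuracy")
plt.plot(H.history["loss"], label="train loss")
plt.plot(H.history["val_loss"], label="val loss")
plt.xlabel("Epoch")
plt.legend()
plt.show()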
# Load the image (use the directly imported load_img / img_to_array helpers:
# the name `image` was reassigned to an array inside the data-loading loops above,
# so image.load_img would fail here)
test_image = load_img(
    "/kaggle/input/imagenonacci/anticonstitutionalism.jpg",
    target_size=(image_size, image_size),
)
# Convert the image to an array
test_image_arr = img_to_array(test_image)
# Preprocess the image
test_image_preprocessed = preprocess_input(test_image_arr)
test_image_preprocessed = np.expand_dims(test_image_preprocessed, axis=0)
# Get the prediction for the test image
prediction = my_new_model.predict(test_image_preprocessed)
# Print the predicted class
if prediction[0][0] > prediction[0][1]:
print("Non-accidented car")
else:
print("Accidented car")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/608/129608134.ipynb
|
car-damage-detection
|
anujms
|
[{"Id": 129608134, "ScriptId": 38065757, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11426286, "CreationDate": "05/15/2023 07:52:05", "VersionNumber": 1.0, "Title": "notebookd41145691a", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 121.0, "LinesInsertedFromPrevious": 121.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185849354, "KernelVersionId": 129608134, "SourceDatasetVersionId": 575693}]
|
[{"Id": 575693, "DatasetId": 278578, "DatasourceVersionId": 592402, "CreatorUserId": 3327828, "LicenseName": "Unknown", "CreationDate": "07/27/2019 15:19:27", "VersionNumber": 1.0, "Title": "Car damage detection", "Slug": "car-damage-detection", "Subtitle": "Damaged and Whole cars image dataset", "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 129459883.0, "TotalUncompressedBytes": 129459883.0}]
|
[{"Id": 278578, "CreatorUserId": 3327828, "OwnerUserId": 3327828.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 575693.0, "CurrentDatasourceVersionId": 592402.0, "ForumId": 289959, "Type": 2, "CreationDate": "07/27/2019 15:19:27", "LastActivityDate": "07/27/2019", "TotalViews": 44814, "TotalDownloads": 4541, "TotalVotes": 97, "TotalKernels": 13}]
|
[{"Id": 3327828, "UserName": "anujms", "DisplayName": "Anuj Shah", "RegisterDate": "06/08/2019", "PerformanceTier": 0}]
|
import random
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
from tqdm.notebook import tqdm
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import img_to_array
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import load_img
import scipy
import os
seed = 0
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
DIRECTORY = "/kaggle/input/car-damage-detection/data1a/training/"
DIRECTORY2 = "/kaggle/input/car-damage-detection/data1a/validation/"
CATEGORIES = ["00-damage", "01-whole"]
# grab the list of images in our dataset directory, then initialize
# the list of data (i.e., images) and class images
print("[INFO] loading images...")
data = []
labels = []
for category in CATEGORIES:
path = os.path.join(DIRECTORY, category)
for img in tqdm(os.listdir(path)):
img_path = os.path.join(path, img)
image = load_img(img_path, target_size=(224, 224))
image = img_to_array(image)
image = preprocess_input(image)
data.append(image)
labels.append(category)
for category in CATEGORIES:
path = os.path.join(DIRECTORY2, category)
for img in tqdm(os.listdir(path)):
img_path = os.path.join(path, img)
image = load_img(img_path, target_size=(224, 224))
image = img_to_array(image)
image = preprocess_input(image)
data.append(image)
labels.append(category)
# use 0 for whole (non-damaged) cars and 1 for the damaged ones
labels = [0 if element == "01-whole" else 1 for element in labels]
labels = to_categorical(labels)
data = np.array(data, dtype="float32")
labels = np.array(labels)
(trainX, testX, trainY, testY) = train_test_split(
data, labels, test_size=0.20, stratify=labels, random_state=42
)
num_classes = 2
resnet_weights_path = (
"/kaggle/input/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5"
)
my_new_model = Sequential()
my_new_model.add(ResNet50(include_top=False, pooling="avg"))
my_new_model.add(Dense(num_classes, activation="softmax"))
# Say not to train first layer (ResNet) model. It is already trained
my_new_model.layers[0].trainable = False
my_new_model.summary()
my_new_model.compile(
optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"]
)
image_size = 224
EPOCHS = 50
BS = 64
# construct the training image generator for data augmentation
aug = ImageDataGenerator(
rotation_range=20,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest",
)
H = my_new_model.fit(
aug.flow(trainX, trainY, batch_size=BS),
steps_per_epoch=len(trainX) // BS,
validation_data=(testX, testY),
validation_steps=len(testX) // BS,
epochs=EPOCHS,
)
# Load the image (use the directly imported load_img / img_to_array helpers:
# the name `image` was reassigned to an array inside the data-loading loops above,
# so image.load_img would fail here)
test_image = load_img(
    "/kaggle/input/imagenonacci/anticonstitutionalism.jpg",
    target_size=(image_size, image_size),
)
# Convert the image to an array
test_image_arr = img_to_array(test_image)
# Preprocess the image
test_image_preprocessed = preprocess_input(test_image_arr)
test_image_preprocessed = np.expand_dims(test_image_preprocessed, axis=0)
# Get the prediction for the test image
prediction = my_new_model.predict(test_image_preprocessed)
# Print the predicted class
if prediction[0][0] > prediction[0][1]:
print("Non-accidented car")
else:
print("Accidented car")
| false | 0 | 1,164 | 0 | 1,186 | 1,164 |
||
129608063
|
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
train.head()
train["clonesize"].max() - train["clonesize"].min()
clone_mean = train["clonesize"].mean()
train["clonesize"] = train["clonesize"].apply(lambda x: (x - clone_mean) / 30.0)
Max_upperTrange_mean = train["MaxOfUpperTRange"].mean()
train["MaxOfUpperTRange"] = train["MaxOfUpperTRange"].apply(
lambda x: (x - Max_upperTrange_mean) / 24.9
)
Min_upperTrange_mean = train["MinOfUpperTRange"].mean()
train["MinOfUpperTRange"] = train["MinOfUpperTRange"].apply(
lambda x: (x - Min_upperTrange_mean) / 18.2
)
avg_upperTrange_mean = train["AverageOfUpperTRange"].mean()
train["AverageOfUpperTRange"] = train["AverageOfUpperTRange"].apply(
lambda x: (x - avg_upperTrange_mean) / 20.8
)
Max_lowerTrange_mean = train["MaxOfLowerTRange"].mean()
train["MaxOfLowerTRange"] = train["MaxOfLowerTRange"].apply(
lambda x: (x - Max_lowerTrange_mean) / 18.0
)
Min_lowerTrange_mean = train["MinOfLowerTRange"].mean()
train["MinOfLowerTRange"] = train["MinOfLowerTRange"].apply(
lambda x: (x - Min_lowerTrange_mean) / 8.7
)
avg_lowerTrange_mean = train["AverageOfLowerTRange"].mean()
train["AverageOfLowerTRange"] = train["AverageOfLowerTRange"].apply(
lambda x: (x - avg_lowerTrange_mean) / 14.7
)
rain_days = train["RainingDays"].mean()
train["RainingDays"] = train["RainingDays"].apply(lambda x: (x - rain_days) / 33.0)
seeds = train["seeds"].mean()
train["seeds"] = train["seeds"].apply(lambda x: (x - seeds) / 24.50)
train
train.columns
features = [
"clonesize",
"honeybee",
"bumbles",
"andrena",
"osmia",
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"AverageRainingDays",
"fruitset",
"fruitmass",
"seeds",
]
X = train[features]
y = train["yield"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
print(X.shape, X_train.shape, X_test.shape)
model = LinearRegression()
model.fit(X_train, y_train)
preds = model.predict(X_test)
preds
mae = mean_absolute_error(y_test, preds)
print(mae)
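# The notebook imports mean_squared_error and r2_score but never uses them, so
# report them alongside MAE for the same hold-out predictions (optional).
rmse = np.sqrt(mean_squared_error(y_test, preds))
r2 = r2_score(y_test, preds)
print("RMSE:", rmse, "R2:", r2)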
id = test["id"]
test = test[features]
clone_mean = test["clonesize"].mean()
r = test["clonesize"].max() - test["clonesize"].min()
test["clonesize"] = test["clonesize"].apply(lambda x: (x - clone_mean) / r)
Max_upperTrange_mean = test["MaxOfUpperTRange"].mean()
test["MaxOfUpperTRange"] = test["MaxOfUpperTRange"].apply(
lambda x: (x - Max_upperTrange_mean) / r
)
Min_upperTrange_mean = test["MinOfUpperTRange"].mean()
test["MinOfUpperTRange"] = test["MinOfUpperTRange"].apply(
lambda x: (x - Min_upperTrange_mean) / r
)
avg_upperTrange_mean = test["AverageOfUpperTRange"].mean()
test["AverageOfUpperTRange"] = test["AverageOfUpperTRange"].apply(
lambda x: (x - avg_upperTrange_mean) / r
)
Max_lowerTrange_mean = test["MaxOfLowerTRange"].mean()
test["MaxOfLowerTRange"] = test["MaxOfLowerTRange"].apply(
lambda x: (x - Max_lowerTrange_mean) / r
)
Min_lowerTrange_mean = test["MinOfLowerTRange"].mean()
test["MinOfLowerTRange"] = test["MinOfLowerTRange"].apply(
lambda x: (x - Min_lowerTrange_mean) / r
)
avg_lowerTrange_mean = test["AverageOfLowerTRange"].mean()
test["AverageOfLowerTRange"] = test["AverageOfLowerTRange"].apply(
lambda x: (x - avg_lowerTrange_mean) / r
)
rain_days = test["RainingDays"].mean()
test["RainingDays"] = test["RainingDays"].apply(lambda x: (x - rain_days) / r)
seeds = test["seeds"].mean()
test["seeds"] = test["seeds"].apply(lambda x: (x - seeds) / r)
cols = [
"clonesize",
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"seeds",
]
for col in cols:
mean = test[col].mean()
rang = test[col].max() - test[col].min()
test[col] = test[col].apply(lambda x: (x - mean) / rang)
model.fit(X, y)
predictions = model.predict(test)
predictions
final = pd.DataFrame()
final.index = id
final["yield"] = predictions
final.to_csv("submission.csv")
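# A sketch of a more consistent preprocessing setup (an optional alternative to
# the manual scaling above, which computes means/ranges separately on train and
# test and reuses the clonesize range `r` for every test column): fit a
# StandardScaler on the training features only and apply it unchanged to test.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

train_raw = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
test_raw = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
pipe = Pipeline([("scale", StandardScaler()), ("lr", LinearRegression())])
pipe.fit(train_raw[features], train_raw["yield"])
alt_predictions = pipe.predict(test_raw[features])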
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/608/129608063.ipynb
| null | null |
[{"Id": 129608063, "ScriptId": 38537243, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10554992, "CreationDate": "05/15/2023 07:51:38", "VersionNumber": 2.0, "Title": "Blueberry prediction", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 114.0, "LinesInsertedFromPrevious": 69.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 45.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
train.head()
train["clonesize"].max() - train["clonesize"].min()
clone_mean = train["clonesize"].mean()
train["clonesize"] = train["clonesize"].apply(lambda x: (x - clone_mean) / 30.0)
Max_upperTrange_mean = train["MaxOfUpperTRange"].mean()
train["MaxOfUpperTRange"] = train["MaxOfUpperTRange"].apply(
lambda x: (x - Max_upperTrange_mean) / 24.9
)
Min_upperTrange_mean = train["MinOfUpperTRange"].mean()
train["MinOfUpperTRange"] = train["MinOfUpperTRange"].apply(
lambda x: (x - Min_upperTrange_mean) / 18.2
)
avg_upperTrange_mean = train["AverageOfUpperTRange"].mean()
train["AverageOfUpperTRange"] = train["AverageOfUpperTRange"].apply(
lambda x: (x - avg_upperTrange_mean) / 20.8
)
Max_lowerTrange_mean = train["MaxOfLowerTRange"].mean()
train["MaxOfLowerTRange"] = train["MaxOfLowerTRange"].apply(
lambda x: (x - Max_lowerTrange_mean) / 18.0
)
Min_lowerTrange_mean = train["MinOfLowerTRange"].mean()
train["MinOfLowerTRange"] = train["MinOfLowerTRange"].apply(
lambda x: (x - Min_lowerTrange_mean) / 8.7
)
avg_lowerTrange_mean = train["AverageOfLowerTRange"].mean()
train["AverageOfLowerTRange"] = train["AverageOfLowerTRange"].apply(
lambda x: (x - avg_lowerTrange_mean) / 14.7
)
rain_days = train["RainingDays"].mean()
train["RainingDays"] = train["RainingDays"].apply(lambda x: (x - rain_days) / 33.0)
seeds = train["seeds"].mean()
train["seeds"] = train["seeds"].apply(lambda x: (x - seeds) / 24.50)
train
train.columns
features = [
"clonesize",
"honeybee",
"bumbles",
"andrena",
"osmia",
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"AverageRainingDays",
"fruitset",
"fruitmass",
"seeds",
]
X = train[features]
y = train["yield"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
print(X.shape, X_train.shape, X_test.shape)
model = LinearRegression()
model.fit(X_train, y_train)
preds = model.predict(X_test)
preds
mae = mean_absolute_error(y_test, preds)
print(mae)
id = test["id"]
test = test[features]
clone_mean = test["clonesize"].mean()
r = test["clonesize"].max() - test["clonesize"].min()
test["clonesize"] = test["clonesize"].apply(lambda x: (x - clone_mean) / r)
Max_upperTrange_mean = test["MaxOfUpperTRange"].mean()
test["MaxOfUpperTRange"] = test["MaxOfUpperTRange"].apply(
lambda x: (x - Max_upperTrange_mean) / r
)
Min_upperTrange_mean = test["MinOfUpperTRange"].mean()
test["MinOfUpperTRange"] = test["MinOfUpperTRange"].apply(
lambda x: (x - Min_upperTrange_mean) / r
)
avg_upperTrange_mean = test["AverageOfUpperTRange"].mean()
test["AverageOfUpperTRange"] = test["AverageOfUpperTRange"].apply(
lambda x: (x - avg_upperTrange_mean) / r
)
Max_lowerTrange_mean = test["MaxOfLowerTRange"].mean()
test["MaxOfLowerTRange"] = test["MaxOfLowerTRange"].apply(
lambda x: (x - Max_lowerTrange_mean) / r
)
Min_lowerTrange_mean = test["MinOfLowerTRange"].mean()
test["MinOfLowerTRange"] = test["MinOfLowerTRange"].apply(
lambda x: (x - Min_lowerTrange_mean) / r
)
avg_lowerTrange_mean = test["AverageOfLowerTRange"].mean()
test["AverageOfLowerTRange"] = test["AverageOfLowerTRange"].apply(
lambda x: (x - avg_lowerTrange_mean) / r
)
rain_days = test["RainingDays"].mean()
test["RainingDays"] = test["RainingDays"].apply(lambda x: (x - rain_days) / r)
seeds = test["seeds"].mean()
test["seeds"] = test["seeds"].apply(lambda x: (x - seeds) / r)
cols = [
"clonesize",
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"RainingDays",
"seeds",
]
for col in cols:
mean = test[col].mean()
rang = test[col].max() - test[col].min()
test[col] = test[col].apply(lambda x: (x - mean) / rang)
model.fit(X, y)
predictions = model.predict(test)
predictions
final = pd.DataFrame()
final.index = id
final["yield"] = predictions
final.to_csv("submission.csv")
| false | 0 | 1,553 | 0 | 1,553 | 1,553 |
||
129608829
|
<jupyter_start><jupyter_text>Students Performance in Exams
### Context
Marks secured by the students
### Content
This data set consists of the marks secured by the students in various subjects.
Kaggle dataset identifier: students-performance-in-exams
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# ## I will use Advertising Dataset
df = pd.read_csv("/kaggle/input/advertising-dataset/advertising.csv")
df
# ## I will use two columns, TV and Sales; TV (advertising budget) will be the independent variable and Sales will be the dependent variable (predict Sales based on TV spend)
TV = df["TV"].values
TV
# len (TV)
Sales = df["Sales"].values
Sales
# len (Sales)
# ## Now will see the shape
from matplotlib import pyplot as plt
x = TV # x: independent variable
y = Sales # y : dependent variable
plt.scatter(x, y, color="black")
plt.xlabel("TV")
plt.ylabel("Sales")
plt.plot
# ## The Shape is linear, so I will continue
# ### Create a Column Vector
x = x.reshape(-1, 1)
len(x)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
x, y, train_size=0.90, random_state=600
)
# i will set 90% of data for train and 10% for test
# x_train
len(x_train)
# x_test
len(x_test)
# ## presenting the training data used for machine learning modeling
plt.scatter(x_train, y_train, color="red")
plt.xlabel("TV advertising budget")
plt.ylabel("Sales")
plt.plot
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(x_train, y_train)
y_predict = lr.predict([[3.0], [4.5], [2.1]])
y_predict
lr.score(x_test, y_test) * 100
# ## Here i can see the predicted values for the test set
y_predict = lr.predict(x_test)
y_predict
plt.scatter(x_train, y_train, color="red")
plt.scatter(x_test, y_predict, color="blue")
plt.xlabel("Reading score")
plt.ylabel("Math score")
plt.plot
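# The fitted line itself (a short optional addition): with a single feature the
# model is just Sales = intercept + slope * TV, so printing the coefficients and
# overlaying the fitted line on the scatter makes the fit explicit.
print("intercept:", lr.intercept_, "slope:", lr.coef_[0])
plt.scatter(x, y, color="black")
plt.plot(x, lr.predict(x), color="green")
plt.xlabel("TV advertising budget")
plt.ylabel("Sales")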
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/608/129608829.ipynb
|
students-performance-in-exams
|
spscientist
|
[{"Id": 129608829, "ScriptId": 38523744, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14996764, "CreationDate": "05/15/2023 07:57:59", "VersionNumber": 2.0, "Title": "Linear Regression", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 93.0, "LinesInsertedFromPrevious": 25.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 68.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185850843, "KernelVersionId": 129608829, "SourceDatasetVersionId": 169835}, {"Id": 185850844, "KernelVersionId": 129608829, "SourceDatasetVersionId": 317184}]
|
[{"Id": 169835, "DatasetId": 74977, "DatasourceVersionId": 180443, "CreatorUserId": 2094163, "LicenseName": "Unknown", "CreationDate": "11/09/2018 18:25:25", "VersionNumber": 1.0, "Title": "Students Performance in Exams", "Slug": "students-performance-in-exams", "Subtitle": "Marks secured by the students in various subjects", "Description": "### Context\n\nMarks secured by the students\n\n\n### Content\n\nThis data set consists of the marks secured by the students in various subjects. \n\n\n### Acknowledgements\n\nhttp://roycekimmons.com/tools/generated_data/exams\n\n\n### Inspiration\n\nTo understand the influence of the parents background, test preparation etc on students performance", "VersionNotes": "Initial release", "TotalCompressedBytes": 72036.0, "TotalUncompressedBytes": 72036.0}]
|
[{"Id": 74977, "CreatorUserId": 2094163, "OwnerUserId": 2094163.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 169835.0, "CurrentDatasourceVersionId": 180443.0, "ForumId": 84238, "Type": 2, "CreationDate": "11/09/2018 18:25:25", "LastActivityDate": "11/09/2018", "TotalViews": 1423654, "TotalDownloads": 235440, "TotalVotes": 3848, "TotalKernels": 1151}]
|
[{"Id": 2094163, "UserName": "spscientist", "DisplayName": "Jakki Seshapanpu", "RegisterDate": "07/24/2018", "PerformanceTier": 1}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# ## I will use Advertising Dataset
df = pd.read_csv("/kaggle/input/advertising-dataset/advertising.csv")
df
# ## I will use two columns, TV and Sales; TV (advertising budget) will be the independent variable and Sales will be the dependent variable (predict Sales based on TV spend)
TV = df["TV"].values
TV
# len (TV)
Sales = df["Sales"].values
Sales
# len (Sales)
# ## Now will see the shape
from matplotlib import pyplot as plt
x = TV # x: independent variable
y = Sales # y : dependent variable
plt.scatter(x, y, color="black")
plt.xlabel("TV")
plt.ylabel("Sales")
plt.plot
# ## The Shape is linear, so I will continue
# ### Create a Column Vector
x = x.reshape(-1, 1)
len(x)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
x, y, train_size=0.90, random_state=600
)
# i will set 90% of data for train and 10% for test
# x_train
len(x_train)
# x_test
len(x_test)
# ## presenting the training data used for machine learning modeling
plt.scatter(x_train, y_train, color="red")
plt.xlabel("TV advertising budget")
plt.ylabel("Sales")
plt.plot
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(x_train, y_train)
y_predict = lr.predict([[3.0], [4.5], [2.1]])
y_predict
lr.score(x_test, y_test) * 100
# ## Here i can see the predicted values for the test set
y_predict = lr.predict(x_test)
y_predict
plt.scatter(x_train, y_train, color="red")
plt.scatter(x_test, y_predict, color="blue")
plt.xlabel("Reading score")
plt.ylabel("Math score")
plt.plot
| false | 1 | 711 | 0 | 771 | 711 |
||
129772633
|
<jupyter_start><jupyter_text>Top 10000 popular Movies TMDB
This is a collection of metadata about the top 10,000 most popular movies on **The Movie Database (TMDB)** as of May 2023. The dataset includes information such as movie titles, release dates, runtime, genres, production companies, budget, and revenue. This data is collected from TMDB's public [API](https://developer.themoviedb.org/docs).
#### Little bit about [TMDB](https://www.themoviedb.org/)
TMDB (The Movie Database) is a popular online database and community platform that provides a vast collection of information about movies, TV shows, and other related content. TMDB allows users to browse and search for movies and TV shows, view information such as cast, crew, synopsis, and ratings, and also contribute to the community by adding their own reviews, ratings, and other content.
#### Purpose
The dataset is intended for use by data analysts, researchers, and developers who are interested in studying or analyzing the popularity and characteristics of movies. The dataset can be used to perform a wide range of analyses, such as exploring trends in movie genres over time, identifying patterns in movie budgets and revenues, and analyzing the impact of different attributes on a movie's popularity.
####Attributes
- **id**: Unique identifier assigned to each movie in the TMDB database.
- **title**: Title of the movie.
- **release_date**: Date on which the movie was released.
- **genres**: List of genres associated with the movie.
- **original_language**: Language in which the movie was originally produced.
- **vote_average**: Average rating given to the movie by TMDB users.
- **vote_count**: Number of votes cast for the movie on TMDB.
- **popularity**: Popularity score assigned to the movie by TMDB based on user engagement.
- **overview**: Brief description or synopsis of the movie.
- **budget**: Estimated budget for producing the movie in USD.
- **production_companies**: List of production companies involved in making the movie.
- **revenue**: Total revenue generated by the movie in USD.
- **runtime**: Total runtime of the movie in minutes.
- **tagline**: Short, memorable phrase associated with the movie, often used in promotional material.
#### [Dataset Creation](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook)
The dataset mentioned has been created by fetching raw data from TMDB's public API, and then cleaning and preprocessing the data to improve its quality and make it easier to work with. The cleaning process has been done using a notebook available [here](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook), which outlines the steps taken to transform the raw data into a more usable format.
Kaggle dataset identifier: top-10000-popular-movies-tmdb-05-2023
<jupyter_script># /kaggle/input/top-10000-popular-movies-tmdb-05-2023/popular_10000_movies_tmdb.csv
import pandas as pd
data = pd.read_csv(
"/kaggle/input/top-10000-popular-movies-tmdb-05-2023/popular_10000_movies_tmdb.csv"
)
data.isnull().sum()
# For my analysis I don't need tagline and overview, so I'm going to delete those columns
data.drop(["tagline", "overview"], axis="columns", inplace=True)
# Deleting rows with null date
data.dropna(inplace=True)
data["year"] = data["release_date"].apply(lambda x: int(x[:4]))
grpData = data.groupby("year", as_index=False)
grpData = grpData.count()[["year", "id"]]
grpData = grpData[(grpData["year"] >= 2000) & (grpData["year"] <= 2023)]
import matplotlib.pyplot as plt
plt.plot(grpData["year"], grpData["id"])
plt.ylabel("Number of movies")
plt.xlabel("Year")
# getting all available genres
genre = []
import ast
def getGenre(arr):
arr = ast.literal_eval(arr)
for i in arr:
genre.append(i)
return arr
data["genres"].apply(lambda x: getGenre(x))
genre = set(genre) # Here we got all the genres
genre
# Let see genre wise movie count
genre_count = {}
for g in genre:
genre_count[g] = [0]
def getCount(arr):
for g in genre:
if g in arr:
genre_count[g][0] += 1
data["genres"].apply(lambda x: getCount(x))
genre_count = pd.DataFrame(genre_count).transpose()
plt.bar(genre_count.index, height=genre_count[0])
plt.xticks(rotation=80)
plt.ylabel("Number of movies")
plt.xlabel("Genre of movie")
# genre_count
# This graph shows :- **Production of the *Drama* genre is very high compared to the others**
genre_count = {}
for g in genre:
genre_count[g] = [0, 0]
def getCount(i):
arr = data["genres"][i]
rating = data["vote_average"][i]
for g in genre:
if g in arr:
curr = genre_count[g][0] * genre_count[g][1] + rating
genre_count[g][0] += 1
genre_count[g][1] = round(curr / genre_count[g][0], 1)
data.index.map(lambda x: getCount(x))
genre_count = pd.DataFrame(genre_count).transpose()
plt.bar(genre_count.index, height=genre_count[1])
plt.xticks(rotation=80)
# genre_count
plt.ylabel("Average rating")
plt.xlabel("Genre of movies")
# This graph shows : "**Although production of *Drama* genre is higher but *War* genre's rating is higher**"
# Getting Production companies list
Production = []
def getProduction(arr):
arr = ast.literal_eval(arr)
for i in arr:
Production.append(i)
return arr
data["production_companies"].apply(lambda x: getProduction(x))
Production = set(Production) # Here we got all the Production Companies
Production
production_count = {}
for p in Production:
production_count[p] = [0]
def getCount(arr):
for p in Production:
if p in arr:
production_count[p][0] += 1
data["production_companies"].apply(lambda x: getCount(x))
production_count = pd.DataFrame(production_count).transpose()
production_count = production_count.sort_values(0, ascending=False).head(10)
plt.bar(production_count.index, height=production_count[0])
plt.xticks(rotation=80)
plt.ylabel("Number of movies")
plt.xlabel("Top 10 production Companies")
# This shows : **Warner Bros. Pictures** *makes the most movies in this dataset*
budget = data.sort_values("budget", ascending=False).head(10)[["title", "budget"]]
plt.bar(budget["title"], height=budget["budget"])
plt.xticks(rotation=90)
plt.ylabel("Budget (1 unit = 10million)")
plt.xlabel("Name of the movie")
language = (
data["original_language"].value_counts().sort_values(ascending=False).head(10)
)
plt.bar(language.index, height=language)
plt.xticks(rotation=90)
plt.ylabel("Number of movies")
plt.xlabel("Language")
data
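# An alternative, more idiomatic way to get per-genre counts and average ratings
# (a sketch using pandas explode). The `genres` column still holds string
# representations of lists (the earlier apply(getGenre) was not assigned back),
# so it is parsed again with ast.literal_eval here.
genre_stats = (
    data.assign(genre=data["genres"].apply(ast.literal_eval))
    .explode("genre")
    .groupby("genre")["vote_average"]
    .agg(["count", "mean"])
    .sort_values("count", ascending=False)
)
print(genre_stats)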
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/772/129772633.ipynb
|
top-10000-popular-movies-tmdb-05-2023
|
ursmaheshj
|
[{"Id": 129772633, "ScriptId": 38590502, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14688602, "CreationDate": "05/16/2023 11:14:38", "VersionNumber": 1.0, "Title": "notebook687af5fb32", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 128.0, "LinesInsertedFromPrevious": 128.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186133987, "KernelVersionId": 129772633, "SourceDatasetVersionId": 5643863}]
|
[{"Id": 5643863, "DatasetId": 3240464, "DatasourceVersionId": 5719190, "CreatorUserId": 7397148, "LicenseName": "CC0: Public Domain", "CreationDate": "05/09/2023 13:43:53", "VersionNumber": 4.0, "Title": "Top 10000 popular Movies TMDB", "Slug": "top-10000-popular-movies-tmdb-05-2023", "Subtitle": "A Comprehensive Collection of Metadata for the Top 10,000 Popular Movies on TMDB", "Description": "This is a collection of metadata about the top 10,000 most popular movies on **The Movie Database (TMDB)** as of May 2023. The dataset includes information such as movie titles, release dates, runtime, genres, production companies, budget, and revenue. This data is collected from TMDB's public [API](https://developer.themoviedb.org/docs). \n\n#### Little bit about [TMDB](https://www.themoviedb.org/)\nTMDB (The Movie Database) is a popular online database and community platform that provides a vast collection of information about movies, TV shows, and other related content. TMDB allows users to browse and search for movies and TV shows, view information such as cast, crew, synopsis, and ratings, and also contribute to the community by adding their own reviews, ratings, and other content.\n\n#### Purpose\nThe dataset is intended for use by data analysts, researchers, and developers who are interested in studying or analyzing the popularity and characteristics of movies. The dataset can be used to perform a wide range of analyses, such as exploring trends in movie genres over time, identifying patterns in movie budgets and revenues, and analyzing the impact of different attributes on a movie's popularity.\n\n####Attributes\n- **id**: Unique identifier assigned to each movie in the TMDB database.\n- **title**: Title of the movie.\n- **release_date**: Date on which the movie was released.\n- **genres**: List of genres associated with the movie.\n- **original_language**: Language in which the movie was originally produced.\n- **vote_average**: Average rating given to the movie by TMDB users.\n- **vote_count**: Number of votes cast for the movie on TMDB.\n- **popularity**: Popularity score assigned to the movie by TMDB based on user engagement.\n- **overview**: Brief description or synopsis of the movie.\n- **budget**: Estimated budget for producing the movie in USD.\n- **production_companies**: List of production companies involved in making the movie.\n- **revenue**: Total revenue generated by the movie in USD.\n- **runtime**: Total runtime of the movie in minutes.\n- **tagline**: Short, memorable phrase associated with the movie, often used in promotional material.\n\n#### [Dataset Creation](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook)\nThe dataset mentioned has been created by fetching raw data from TMDB's public API, and then cleaning and preprocessing the data to improve its quality and make it easier to work with. The cleaning process has been done using a notebook available [here](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook), which outlines the steps taken to transform the raw data into a more usable format.", "VersionNotes": "Data Update 2023-05-09", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3240464, "CreatorUserId": 7397148, "OwnerUserId": 7397148.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5921776.0, "CurrentDatasourceVersionId": 5999208.0, "ForumId": 3305699, "Type": 2, "CreationDate": "05/08/2023 19:50:26", "LastActivityDate": "05/08/2023", "TotalViews": 7400, "TotalDownloads": 1454, "TotalVotes": 37, "TotalKernels": 10}]
|
[{"Id": 7397148, "UserName": "ursmaheshj", "DisplayName": "Mahesh Jadhav", "RegisterDate": "05/11/2021", "PerformanceTier": 1}]
|
# /kaggle/input/top-10000-popular-movies-tmdb-05-2023/popular_10000_movies_tmdb.csv
import pandas as pd
data = pd.read_csv(
"/kaggle/input/top-10000-popular-movies-tmdb-05-2023/popular_10000_movies_tmdb.csv"
)
data.isnull().sum()
# For my analysis I don't need tagline and overview, so I'm going to delete those columns
data.drop(["tagline", "overview"], axis="columns", inplace=True)
# Deleting rows with null date
data.dropna(inplace=True)
data["year"] = data["release_date"].apply(lambda x: int(x[:4]))
grpData = data.groupby("year", as_index=False)
grpData = grpData.count()[["year", "id"]]
grpData = grpData[(grpData["year"] >= 2000) & (grpData["year"] <= 2023)]
import matplotlib.pyplot as plt
plt.plot(grpData["year"], grpData["id"])
plt.ylabel("Number of movies")
plt.xlabel("Year")
# getting all available genres
genre = []
import ast
def getGenre(arr):
arr = ast.literal_eval(arr)
for i in arr:
genre.append(i)
return arr
data["genres"].apply(lambda x: getGenre(x))
genre = set(genre) # Here we got all the genres
genre
# Let see genre wise movie count
genre_count = {}
for g in genre:
genre_count[g] = [0]
def getCount(arr):
for g in genre:
if g in arr:
genre_count[g][0] += 1
data["genres"].apply(lambda x: getCount(x))
genre_count = pd.DataFrame(genre_count).transpose()
plt.bar(genre_count.index, height=genre_count[0])
plt.xticks(rotation=80)
plt.ylabel("Number of movies")
plt.xlabel("Genre of movie")
# genre_count
# This graph shows :- **Production of the *Drama* genre is very high compared to the others**
genre_count = {}
for g in genre:
genre_count[g] = [0, 0]
def getCount(i):
arr = data["genres"][i]
rating = data["vote_average"][i]
for g in genre:
if g in arr:
curr = genre_count[g][0] * genre_count[g][1] + rating
genre_count[g][0] += 1
genre_count[g][1] = round(curr / genre_count[g][0], 1)
data.index.map(lambda x: getCount(x))
genre_count = pd.DataFrame(genre_count).transpose()
plt.bar(genre_count.index, height=genre_count[1])
plt.xticks(rotation=80)
# genre_count
plt.ylabel("Average rating")
plt.xlabel("Genre of movies")
# This graph shows : "**Although production of *Drama* genre is higher but *War* genre's rating is higher**"
# Getting Production companies list
Production = []
def getProduction(arr):
arr = ast.literal_eval(arr)
for i in arr:
Production.append(i)
return arr
data["production_companies"].apply(lambda x: getProduction(x))
Production = set(Production) # Here we got all the Production Companies
Production
production_count = {}
for p in Production:
production_count[p] = [0]
def getCount(arr):
for p in Production:
if p in arr:
production_count[p][0] += 1
data["production_companies"].apply(lambda x: getCount(x))
production_count = pd.DataFrame(production_count).transpose()
production_count = production_count.sort_values(0, ascending=False).head(10)
plt.bar(production_count.index, height=production_count[0])
plt.xticks(rotation=80)
plt.ylabel("Number of movies")
plt.xlabel("Top 10 production Companies")
# This shows : **Warner Bros. Pictures** *makes the most movies in this dataset*
budget = data.sort_values("budget", ascending=False).head(10)[["title", "budget"]]
plt.bar(budget["title"], height=budget["budget"])
plt.xticks(rotation=90)
plt.ylabel("Budget (1 unit = 10million)")
plt.xlabel("Name of the movie")
language = (
data["original_language"].value_counts().sort_values(ascending=False).head(10)
)
plt.bar(language.index, height=language)
plt.xticks(rotation=90)
plt.ylabel("Number of movies")
plt.xlabel("Language")
data
| false | 1 | 1,204 | 0 | 1,902 | 1,204 |
||
129772682
|
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk(
"/kaggle/input/icr-identify-age-related-conditions"
):
for filename in filenames:
print(os.path.join(dirname, filename))
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from xgboost import XGBClassifier
# Load data
train = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/train.csv")
test = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/test.csv")
# Convert 'EJ' to one-hot encoding
train = pd.get_dummies(train, columns=["EJ"])
test = pd.get_dummies(test, columns=["EJ"])
# If test data EJ has only one category, we ensure to have the same structure as in train set
if "EJ_B" not in test.columns:
test["EJ_B"] = 0
# Define feature columns
feature_cols = train.drop(["Id", "Class"], axis=1).columns
# Define target column
target_col = "Class"
# Create the preprocessing pipelines for both numeric and categorical data
numeric_features = (
train[feature_cols].select_dtypes(include=["int64", "float64"]).columns
)
numeric_transformer = Pipeline(
steps=[("imputer", SimpleImputer(strategy="mean")), ("scaler", StandardScaler())]
)
categorical_features = train[feature_cols].select_dtypes(include=["object"]).columns
categorical_transformer = Pipeline(
steps=[
("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
("onehot", OneHotEncoder(handle_unknown="ignore")),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, numeric_features),
("cat", categorical_transformer, categorical_features),
]
)
# Append classifier to preprocessing pipeline
clf = Pipeline(
steps=[
("preprocessor", preprocessor),
("classifier", XGBClassifier(use_label_encoder=False, eval_metric="logloss")),
]
)
# Split data into features and target
X_train = train[feature_cols]
y_train = train[target_col]
# Fit the model
clf.fit(X_train, y_train)
# Create a DataFrame for the probabilities
probabilities = pd.DataFrame(
clf.predict_proba(test[feature_cols]), columns=[f"class_{i}" for i in range(2)]
)
# Concatenate the test IDs with their associated probabilities
submission = pd.concat([test["Id"], probabilities], axis=1)
# Save the DataFrame to a csv file
submission.to_csv("submission.csv", index=False)
submission.head()
# TESTING THE MODEL (no submit!)
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
import xgboost as xgb

# Hold out a validation split so there are X_val / y_val to evaluate on
X_tr, X_val, y_tr, y_val = train_test_split(
    X_train, y_train, test_size=0.2, stratify=y_train, random_state=42
)
model_xgb = xgb.XGBClassifier(use_label_encoder=False, eval_metric="logloss")
# Fit the model with the training split
model_xgb.fit(X_tr, y_tr)
# Get probabilities instead of predicted labels, since log loss is a probabilistic metric
y_train_proba = model_xgb.predict_proba(X_tr)
y_val_proba = model_xgb.predict_proba(X_val)
# Calculate log loss for training and validation sets
train_log_loss = log_loss(y_tr, y_train_proba)
val_log_loss = log_loss(y_val, y_val_proba)
print(f"Train Log Loss: {train_log_loss}")
print(f"Validation Log Loss: {val_log_loss}")
# test the statistical significance of the parameters using Logistic Regression
import statsmodels.api as sm
# Preprocess the data
X_train_preprocessed = preprocessor.fit_transform(X_train)
# Add a constant to the features
X_train_with_constant = sm.add_constant(X_train_preprocessed)
# Fit the logistic regression model
model = sm.Logit(y_train, X_train_with_constant)
result = model.fit()
# Print the summary
print(result.summary())
# use cross-validation to find the best value of lambda (via LassoCV),
# fit a Lasso model, and get the absolute values of the coefficients
from sklearn.linear_model import LassoCV
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
# Define your preprocessing pipeline
preprocessing = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler())
# Apply preprocessing
X_train_preprocessed = preprocessing.fit_transform(X_train)
X_val_preprocessed = preprocessing.transform(X_val)
# Initialize a LassoCV model
lasso = LassoCV(cv=5)
# Fit the LassoCV model on the preprocessed data
lasso.fit(X_train_preprocessed, y_train)
# Get the feature importance (the coefficients of each feature)
importance = np.abs(lasso.coef_)
# Get the features selected by Lasso (features with non-zero coefficients)
features_selected = X_train.columns[importance > 0]
print("Features selected by Lasso:")
print(features_selected)
# Identify multicollinearity by computing the Variance Inflation Factor (VIF) for each feature in the model
import pandas as pd
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
# Load data
train = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/train.csv")
# Convert 'EJ' to one-hot encoding
train = pd.get_dummies(train, columns=["EJ"])
# If train data EJ has only one category, we ensure to have the same structure as in train set
if "EJ_B" not in train.columns:
train["EJ_B"] = 0
# Define feature columns
feature_cols = train.drop(["Id", "Class"], axis=1).columns
# Preprocess the data
preprocessing = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler())
X = preprocessing.fit_transform(train[feature_cols])
# Compute VIF for each feature
vif = pd.DataFrame()
vif["features"] = feature_cols
vif["VIF"] = [variance_inflation_factor(X, i) for i in range(X.shape[1])]
print(vif)
# From the output, we see that the variables 'DV', 'EH', 'FD', 'GL', 'EJ_A', and 'EJ_B' have a high
# variance inflation factor (VIF > 8), suggesting a high level of multicollinearity.
# The 'inf' (infinite) VIF values for 'EJ_A' and 'EJ_B' arise because these variables are perfectly collinear
# (they were one-hot encoded from the same original variable).
# Since multicollinearity doesn't necessarily harm the model's predictive power, we'll keep it for now.
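# Illustrative sketch (not in the original notebook): the infinite VIFs can be avoided by
# dropping one dummy per one-hot-encoded variable (here 'EJ_B') and recomputing the VIFs.
reduced_cols = [c for c in feature_cols if c != "EJ_B"]
X_reduced = preprocessing.fit_transform(train[reduced_cols])
vif_reduced = pd.DataFrame(
    {
        "features": reduced_cols,
        "VIF": [
            variance_inflation_factor(X_reduced, i) for i in range(X_reduced.shape[1])
        ],
    }
)
print(vif_reduced.sort_values("VIF", ascending=False).head(10))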
# introducing the Greeks
# Load 'greeks.csv'
greeks = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/greeks.csv")
# Merge 'train' and 'greeks'
train_merged = pd.merge(train, greeks, on="Id")
train_merged.head()
# We did some exploration in Stata of the relationship between the features in 'greeks.csv' and the target variable 'Class'.
# The results suggest that 'Alpha', 'Beta', and 'Gamma' are perfect predictors of the target variable,
# which is why logistic regression fails here (it can't deal with perfect predictors).
# On the other hand, 'Delta' and 'EJ' do seem to have a significant relationship with 'Class'.
# The odds ratios for 'Delta' and 'EJ' are greater than 1, which means that as these features increase,
# the odds of 'Class' being 1 (having the condition) also increase.
# include these features in the model and start over
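# Before re-fitting, a quick sanity check of the "perfect predictor" claim above
# (illustrative sketch, not in the original notebook, using 'train_merged' from the merge a
# few cells earlier): if a Greek perfectly determines 'Class', every row of its crosstab
# should have exactly one non-zero column.
for greek_col in ["Alpha", "Beta", "Gamma"]:
    print(pd.crosstab(train_merged[greek_col], train_merged["Class"]))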
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from xgboost import XGBClassifier
# Load greeks data
greeks = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/greeks.csv")
train = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/train.csv")
# Merge 'train' and 'greeks'
train = pd.merge(train, greeks, on="Id")
# Convert 'EJ' to one-hot encoding
train = pd.get_dummies(train, columns=["EJ"])
# Define feature columns
feature_cols = train.drop(["Id", "Class"], axis=1).columns
# Define target column
target_col = "Class"
# Create the preprocessing pipelines for both numeric and categorical data
numeric_features = (
train[feature_cols].select_dtypes(include=["int64", "float64"]).columns
)
numeric_transformer = Pipeline(
steps=[("imputer", SimpleImputer(strategy="mean")), ("scaler", StandardScaler())]
)
categorical_features = train[feature_cols].select_dtypes(include=["object"]).columns
categorical_transformer = Pipeline(
steps=[
("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
("onehot", OneHotEncoder(handle_unknown="ignore")),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, numeric_features),
("cat", categorical_transformer, categorical_features),
]
)
# Append classifier to preprocessing pipeline
clf = Pipeline(
steps=[
("preprocessor", preprocessor),
("classifier", XGBClassifier(use_label_encoder=False, eval_metric="logloss")),
]
)
# Split data into features and target
X_train = train[feature_cols]
y_train = train[target_col]
# Fit the model
clf.fit(X_train, y_train)
# NOTE (editorial assumption): the Greek columns merged into train are not provided for
# the test set, so test[feature_cols] would raise a KeyError; reindex and fill them with
# the "missing" token so the fitted pipeline can still transform the test data.
X_test = test.reindex(columns=feature_cols, fill_value="missing")
# Create a DataFrame for the probabilities
probabilities = pd.DataFrame(
    clf.predict_proba(X_test), columns=[f"class_{i}" for i in range(2)]
)
# Concatenate the test IDs with their associated probabilities
submission = pd.concat([test["Id"], probabilities], axis=1)
# Save the DataFrame to a csv file
submission.to_csv("submission.csv", index=False)
submission.head()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/772/129772682.ipynb
| null | null |
[{"Id": 129772682, "ScriptId": 38583114, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 381948, "CreationDate": "05/16/2023 11:15:03", "VersionNumber": 4.0, "Title": "age-related condition", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 272.0, "LinesInsertedFromPrevious": 184.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 88.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 2,618 | 0 | 2,618 | 2,618 |
||
129772732
|
<jupyter_start><jupyter_text>Copper Mining Company - Stock Price Prediction
"The 'KGHM Dataset' is a meticulously curated collection of financial and economic data specifically designed for the purpose of stock price prediction for KGHM, a leading copper mining company. This dataset encompasses a wide range of features including historical prices, macroeconomic indicators, industry-related metrics, company-specific financials, and technical indicators. The dataset comprises 59 carefully selected features that have the potential to influence the stock price of KGHM. The data has been sourced from reputable platforms such as Yahoo Finance, Wikipedia, and the official website of KGHM. The dataset has undergone rigorous pre-processing and feature engineering to ensure data quality and relevance for the machine learning models used in the stock price prediction analysis. It serves as a valuable resource for conducting in-depth analysis and developing accurate predictive models for KGHM's stock price movements."
Kaggle dataset identifier: cooper-mining-company-stock-price-prediction
<jupyter_script>import os
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
import os
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
from sklearn.model_selection import GridSearchCV
import numpy as np
import datetime
import sklearn.metrics as metrics
#!pip install yfinance
import yfinance as yf
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
ticker = "KGH.WA"
kurs = 4.21 # USD/PLN
currency = "USD"
# IMPORTANT READ BEFORE SETUP
# the effective start will be about 14 trading days after the startdate you set (the 14-day
# lagged adjusted close is unavailable for the first 14 records, so they are dropped)
startdate = "2017-03-01"
# IMPORTANT READ BEFORE SETUP
# the effective end will be about 21 trading days before the enddate you set (the last 21 days
# have no target variable). The predictions made for those last 21 days are effectively
# predictions for the next 21 days: for example, if you set enddate to 2023-01-01, you will
# be able to predict from 2023-01-02 until 2023-01-23 (21 days).
enddate = "2023-03-10"
model_version = "1.0"
data_version = "v1"
shift_back_num = -21
# 'cwd' and 'model_version_folder_path' are used inside clean_data() below but were never
# defined in this version of the notebook; define them here (assumed folder layout) so the
# script can run end to end.
cwd = os.getcwd()
model_version_folder_path = os.path.join(cwd, "models", model_version)
data = yf.download(ticker, startdate, enddate)
# # ***Introduction***
# Hello everyone! Welcome to this Kaggle notebook where we will be exploring the fascinating world of stock price prediction. Predicting stock prices has always been an attractive topic to both newbies and experienced traders in the financial industry. In this notebook, we will dive into the complexities of financial market data, and we will attempt to make sense of it using Machine Learning.
# We will start by importing and cleaning our data, which is a critical step in any data science project. The dataset we'll be using contains daily stock prices, with features like Open, High, Low, Close prices, and Volume of transactions. Our target variable, which we'll aim to predict, is the 'target_class' column.
# "target_class" contains 4 values:
# "up_more_5%"
# "down_more_5%"
# "up_less_5%"
# "down_less_5%"
# In this notebook I will try to predict the target label for the KGHM price 21 days into the future. Since the data I gathered covers 2017-03-01 to 2023-03-10, we will be able to compare our predictions to the actual KGHM price.
# I will explain and show how I collected the data from scratch, including the cleaning process, but you can access the final dataset here: https://www.kaggle.com/datasets/maciejgronczynski/cooper-mining-company-stock-price-prediction
# First, we will define a few functions that we will need in the next steps:
# The **exchange_rates** function retrieves historical exchange rate data for a selection of currencies (EUR, JPY, CNY, and PLN) relative to USD. It requires two date parameters: startdate and enddate to specify the data collection period.
# The function uses the yfinance library to download data from Yahoo Finance, keeping only the adjusted close price for each day. Each currency's data is stored in a .csv file and combined into a single DataFrame, with each column representing a different currency pair's daily exchange rate.
# Finally, the DataFrame is returned, providing a consolidated view of exchange rate trends over the specified time period.
def exchange_rates(startdate, enddate):
ex_rates = ["EURUSD=X", "JPY=X", "CNY=X", "PLN=X"]
final_data = pd.DataFrame()
for ticker in ex_rates:
data = yf.download(ticker, startdate, enddate)
data = data.drop(["Open", "High", "Low", "Close", "Volume"], axis=1)
name = ticker + "_price"
data.rename(columns={"Adj Close": name}, inplace=True)
data.to_csv("dissertation_data.csv")
# merge dataframes using pd.concat()
final_data = pd.concat([final_data, data[name]], axis=1)
final_data = final_data.rename_axis("Date")
final_data.index = pd.to_datetime(final_data.index)
return final_data
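# Illustrative usage of exchange_rates() (not in the original notebook; requires internet
# access for yfinance): pull a short window of rates and inspect the last few rows.
sample_rates = exchange_rates("2023-01-01", "2023-02-01")
print(sample_rates.tail())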
#
# The **convert_volume_to_number** function takes as an input a string that represents a volume of stocks, potentially ending in the letter 'M' to denote 'million'.
# This function checks whether the last character of the input string is 'M'. If it is, the function removes the 'M' and converts the rest of the string to a float, then multiplies it by 1,000,000 to get the volume in numeric form. This allows us to handle stock volumes given in millions in a numerical format.
# If the last character is not 'M', the function simply converts the string to an integer. This is done under the assumption that if the volume is not denoted in millions, it is an exact numeric representation.
# In both cases, the function returns the volume as an integer.
def convert_volume_to_number(volume_str):
if volume_str[-1] == "M":
return int(float(volume_str[:-1]) * 1000000)
else:
return int(volume_str)
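# Illustrative check of convert_volume_to_number() (not in the original notebook):
# volumes reported like "1.5M" become 1500000, plain numeric strings are cast to int.
print(convert_volume_to_number("1.5M"))  # 1500000
print(convert_volume_to_number("52300"))  # 52300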
#
# The **classify_movement** function is used to categorize changes in stock prices between the current and future periods. This function takes two arguments: current, which represents the current stock price, and future, which is the stock price at a future time.
# The function returns an integer representing one of four categories based on the percentage change in stock price:
# If the future price is more than 5% higher than the current price, the function returns 0, indicating a significant price increase.
# If the future price is more than 5% lower than the current price, the function returns 1, indicating a significant price decrease.
# If the future price is higher but less than 5% above the current price, the function returns 2, indicating a minor price increase.
# If none of the above conditions are met, the function returns 3, implying a minor price decrease (less than 5%).
def classify_movement(current, future):
if future > current * 1.05: # Up more than 5%
return 0
elif future < current * 0.95: # Down more than 5%
return 1
elif future > current: # Up less than 5%
return 2
else: # Down less than 5%
return 3
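# Illustrative check of classify_movement() (not in the original notebook): with a
# current price of 100, the four price bands map to the four class labels.
print(classify_movement(100, 106))  # 0 -> up more than 5%
print(classify_movement(100, 94))   # 1 -> down more than 5%
print(classify_movement(100, 103))  # 2 -> up less than 5%
print(classify_movement(100, 98))   # 3 -> down less than 5%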
# In addition to the data we'll be downloading directly via the yfinance library, we've also manually web scraped some crucial economic indicators that are not readily available on Yahoo Finance. These indicators include the inflation rate, interest rate, and the M3 money supply for Poland.
inflation_dict = {
# 2017
"2017-03-01": 2,
"2017-04-01": 2,
"2017-05-01": 1.9,
"2017-06-01": 1.5,
"2017-07-01": 1.7,
"2017-08-01": 1.8,
"2017-09-01": 2.2,
"2017-10-01": 2.1,
"2017-11-01": 2.5,
"2017-12-01": 2.1,
# 2018
"2018-01-01": 1.9,
"2018-02-01": 1.4,
"2018-03-01": 1.3,
"2018-04-01": 1.6,
"2018-05-01": 1.7,
"2018-06-01": 2,
"2018-07-01": 2,
"2018-08-01": 2,
"2018-09-01": 1.9,
"2018-10-01": 1.7,
"2018-11-01": 1.3,
"2018-12-01": 1.1,
# 2019
"2019-01-01": 0.7,
"2019-02-01": 1.2,
"2019-03-01": 1.7,
"2019-04-01": 2.2,
"2019-05-01": 2.4,
"2019-06-01": 2.6,
"2019-07-01": 2.9,
"2019-08-01": 2.9,
"2019-09-01": 2.6,
"2019-10-01": 2.5,
"2019-11-01": 2.6,
"2019-12-01": 3.4,
# 2020
"2020-01-01": 4.4,
"2020-02-01": 4.7,
"2020-03-01": 4.6,
"2020-04-01": 3.4,
"2020-05-01": 2.9,
"2020-06-01": 3.3,
"2020-07-01": 3,
"2020-08-01": 2.9,
"2020-09-01": 3.2,
"2020-10-01": 3.1,
"2020-11-01": 3,
"2020-12-01": 2.4,
# 2021
"2021-01-01": 2.6,
"2021-02-01": 2.4,
"2021-03-01": 3.2,
"2021-04-01": 4.3,
"2021-05-01": 4.7,
"2021-06-01": 4.4,
"2021-07-01": 5,
"2021-08-01": 5.5,
"2021-09-01": 5.9,
"2021-10-01": 6.8,
"2021-11-01": 7.8,
"2021-12-01": 8.6,
# 2022
"2022-01-01": 9.4,
"2022-02-01": 8.5,
"2022-03-01": 11,
"2022-04-01": 12.3,
"2022-05-01": 13.9,
"2022-06-01": 15.5,
"2022-07-01": 15.6,
"2022-08-01": 16.1,
"2022-09-01": 17.2,
"2022-10-01": 17.9,
"2022-11-01": 17.5,
"2022-12-01": 16.6,
# 2023
"2023-01-01": 16.6,
"2023-02-01": 18.4,
"2023-03-01": 16.2,
}
interest_dict = {
# 2017
"2017-03-01": 1.5,
"2017-04-01": 1.5,
"2017-05-01": 1.5,
"2017-06-01": 1.5,
"2017-07-01": 1.5,
"2017-08-01": 1.5,
"2017-09-01": 1.5,
"2017-10-01": 1.5,
"2017-11-01": 1.5,
"2017-12-01": 1.5,
# 2018
"2018-01-01": 1.5,
"2018-02-01": 1.5,
"2018-03-01": 1.5,
"2018-04-01": 1.5,
"2018-05-01": 1.5,
"2018-06-01": 1.5,
"2018-07-01": 1.5,
"2018-08-01": 1.5,
"2018-09-01": 1.5,
"2018-10-01": 1.5,
"2018-11-01": 1.5,
"2018-12-01": 1.5,
# 2019
"2019-01-01": 1.5,
"2019-02-01": 1.5,
"2019-03-01": 1.5,
"2019-04-01": 1.5,
"2019-05-01": 1.5,
"2019-06-01": 1.5,
"2019-07-01": 1.5,
"2019-08-01": 1.5,
"2019-09-01": 1.5,
"2019-10-01": 1.5,
"2019-11-01": 1.5,
"2019-12-01": 1.5,
# 2020
"2020-01-01": 1.5,
"2020-02-01": 1.5,
"2020-03-01": 1,
"2020-04-01": 0.5,
"2020-05-01": 0.1,
"2020-06-01": 0.1,
"2020-07-01": 0.1,
"2020-08-01": 0.1,
"2020-09-01": 0.1,
"2020-10-01": 0.1,
"2020-11-01": 0.1,
"2020-12-01": 0.1,
# 2021
"2021-01-01": 0.1,
"2021-02-01": 0.1,
"2021-03-01": 0.1,
"2021-04-01": 0.1,
"2021-05-01": 0.1,
"2021-06-01": 0.1,
"2021-07-01": 0.1,
"2021-08-01": 0.1,
"2021-09-01": 0.1,
"2021-10-01": 0.5,
"2021-11-01": 1.25,
"2021-12-01": 1.75,
# 2022
"2022-01-01": 2.25,
"2022-02-01": 2.75,
"2022-03-01": 3.5,
"2022-04-01": 4.5,
"2022-05-01": 5.25,
"2022-06-01": 6,
"2022-07-01": 6.5,
"2022-08-01": 6.75,
"2022-09-01": 6.75,
"2022-10-01": 6.75,
"2022-11-01": 6.75,
"2022-12-01": 6.75,
# 2023
"2023-01-01": 6.75,
"2023-02-01": 6.75,
"2023-03-01": 6.75,
}
m3poland_dict = {
"2023-01-01": 2091314.88,
"2023-02-01": 2131400.00,
"2023-03-01": 2131400.00,
# 2022
"2022-01-01": 1985020.62,
"2022-02-01": 1985020.62,
"2022-03-01": 2009566.25,
"2022-04-01": 2009566.25,
"2022-05-01": 2009566.25,
"2022-06-01": 1998843.50,
"2022-07-01": 1998843.50,
"2022-08-01": 1998843.50,
"2022-09-01": 2062092.75,
"2022-10-01": 2062092.75,
"2022-11-01": 2062092.75,
"2022-12-01": 2091314.88,
# 2021
"2021-01-01": 1822650.12,
"2021-02-01": 1822650.12,
"2021-03-01": 1862487.75,
"2021-04-01": 1862487.75,
"2021-05-01": 1862487.75,
"2021-06-01": 1876000.62,
"2021-07-01": 1876000.62,
"2021-08-01": 1876000.62,
"2021-09-01": 1985020.62,
"2021-10-01": 1985020.62,
"2021-11-01": 1985020.62,
"2021-12-01": 1985020.62,
# 2020
"2020-01-01": 1565639.75,
"2020-02-01": 1565639.75,
"2020-03-01": 1628423.38,
"2020-04-01": 1628423.38,
"2020-05-01": 1628423.38,
"2020-06-01": 1746224.75,
"2020-07-01": 1746224.75,
"2020-08-01": 1746224.75,
"2020-09-01": 1762175.62,
"2020-10-01": 1762175.62,
"2020-11-01": 1762175.62,
"2020-12-01": 1822650.12,
# 2019
"2019-01-01": 1446093.38,
"2019-02-01": 1446093.38,
"2019-03-01": 1457187.12,
"2019-04-01": 1457187.12,
"2019-05-01": 1457187.12,
"2019-06-01": 1478217.75,
"2019-07-01": 1478217.75,
"2019-08-01": 1478217.75,
"2019-09-01": 1506171.25,
"2019-10-01": 1506171.25,
"2019-11-01": 1506171.25,
"2019-12-01": 1565639.75,
# 2018
"2018-01-01": 1324383.25,
"2018-02-01": 1324383.25,
"2018-03-01": 1325795.62,
"2018-04-01": 1325795.62,
"2018-05-01": 1325795.62,
"2018-06-01": 1352491.88,
"2018-07-01": 1352491.88,
"2018-08-01": 1352491.88,
"2018-09-01": 1376164.75,
"2018-10-01": 1376164.75,
"2018-11-01": 1376164.75,
"2018-12-01": 1446093.38,
# 2017
"2017-03-01": 1261178.12,
"2017-04-01": 1261178.12,
"2017-05-01": 1261178.12,
"2017-06-01": 1261178.12,
"2017-07-01": 1261178.12,
"2017-08-01": 1261178.12,
"2017-09-01": 1275942.38,
"2017-10-01": 1275942.38,
"2017-11-01": 1275942.38,
"2017-12-01": 1324383.25,
}
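# Illustrative sketch (not in the original notebook): combine the three hand-collected
# monthly series above into one DataFrame to eyeball them before they are mapped onto
# the daily price data inside clean_data() below.
macro = pd.DataFrame(
    {
        "inflation_rate": inflation_dict,
        "interest_rate": interest_dict,
        "M3_money_supply": m3poland_dict,
    }
)
macro.index = pd.to_datetime(macro.index)
print(macro.sort_index().tail())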
def clean_data(data):
last_index = data.iloc[-1].name
print(
"Last available date in data is:",
last_index,
"Live Prediction will be on that date.",
)
data[["Adj Close"]] = data[["Adj Close"]].div(kurs)
copper = yf.download("HG=F", start=startdate, end=enddate)
silver = yf.download("SI=F", start=startdate, end=enddate)
gold = yf.download("GLD", start=startdate, end=enddate)
sp500 = yf.download("^GSPC", start=startdate, end=enddate)
DJIA = yf.download("^DJI", start=startdate, end=enddate)
NASDAQ_Composite = yf.download("^IXIC", start=startdate, end=enddate)
FTSE_100 = yf.download("^FTSE", start=startdate, end=enddate)
DAX = yf.download("^GDAXI", start=startdate, end=enddate)
CAC_40 = yf.download("^FCHI", start=startdate, end=enddate)
NIKKEI_225 = yf.download("^N225", start=startdate, end=enddate)
SHANGHAI_Composite = yf.download("000001.SS", start=startdate, end=enddate)
Hang_Seng_Index = yf.download("^HSI", start=startdate, end=enddate)
BSE_Sensex = yf.download("^BSESN", start=startdate, end=enddate)
ASX_200 = yf.download("^AXJO", start=startdate, end=enddate)
GMMP_ETF = yf.download("PICK", start=startdate, end=enddate)
RESM_ETF = yf.download("REMX", start=startdate, end=enddate)
copper = pd.DataFrame(copper["Adj Close"])
silver = pd.DataFrame(silver["Adj Close"])
gold = pd.DataFrame(gold["Adj Close"])
rates = exchange_rates(startdate, enddate)
sp500 = pd.DataFrame(sp500["Adj Close"])
DJIA = pd.DataFrame(DJIA["Adj Close"])
NASDAQ_Composite = pd.DataFrame(NASDAQ_Composite["Adj Close"])
FTSE_100 = pd.DataFrame(FTSE_100["Adj Close"])
DAX = pd.DataFrame(DAX["Adj Close"])
CAC_40 = pd.DataFrame(CAC_40["Adj Close"])
NIKKEI_225 = pd.DataFrame(NIKKEI_225["Adj Close"])
SHANGHAI_Composite = pd.DataFrame(SHANGHAI_Composite["Adj Close"])
Hang_Seng_Index = pd.DataFrame(Hang_Seng_Index["Adj Close"])
BSE_Sensex = pd.DataFrame(BSE_Sensex["Adj Close"])
ASX_200 = pd.DataFrame(ASX_200["Adj Close"])
GMMP_ETF = pd.DataFrame(GMMP_ETF["Adj Close"])
RESM_ETF = pd.DataFrame(RESM_ETF["Adj Close"])
    # Rename each downloaded "Adj Close" column to a descriptive *_price name
copper.rename(columns={"Adj Close": "copper_price"}, inplace=True)
silver.rename(columns={"Adj Close": "silver_price"}, inplace=True)
gold.rename(columns={"Adj Close": "gold_price"}, inplace=True)
sp500.rename(columns={"Adj Close": "sp500_price"}, inplace=True)
DJIA.rename(columns={"Adj Close": "DJIA_price"}, inplace=True)
NASDAQ_Composite.rename(
columns={"Adj Close": "NASDAQ_Composite_price"}, inplace=True
)
FTSE_100.rename(columns={"Adj Close": "FTSE_100_price"}, inplace=True)
DAX.rename(columns={"Adj Close": "DAX_price"}, inplace=True)
CAC_40.rename(columns={"Adj Close": "CAC_40_price"}, inplace=True)
NIKKEI_225.rename(columns={"Adj Close": "NIKKEI_225_price"}, inplace=True)
SHANGHAI_Composite.rename(
columns={"Adj Close": "SHANGHAI_Composite_price"}, inplace=True
)
Hang_Seng_Index.rename(columns={"Adj Close": "Hang_Seng_Index_price"}, inplace=True)
BSE_Sensex.rename(columns={"Adj Close": "BSE_Sensex_price"}, inplace=True)
ASX_200.rename(columns={"Adj Close": "ASX_200_price"}, inplace=True)
GMMP_ETF.rename(columns={"Adj Close": "GMMP_ETF_price"}, inplace=True)
RESM_ETF.rename(columns={"Adj Close": "RESM_ETF_price"}, inplace=True)
data = pd.merge(data, copper, how="left", on="Date")
data = pd.merge(data, silver, how="left", on="Date")
data = pd.merge(data, gold, how="left", on="Date")
data = pd.merge(data, rates, how="left", on="Date")
data = pd.merge(data, sp500, how="left", on="Date")
data = pd.merge(data, DJIA, how="left", on="Date")
data = pd.merge(data, NASDAQ_Composite, how="left", on="Date")
data = pd.merge(data, FTSE_100, how="left", on="Date")
data = pd.merge(data, DAX, how="left", on="Date")
data = pd.merge(data, CAC_40, how="left", on="Date")
data = pd.merge(data, NIKKEI_225, how="left", on="Date")
data = pd.merge(data, SHANGHAI_Composite, how="left", on="Date")
data = pd.merge(data, Hang_Seng_Index, how="left", on="Date")
data = pd.merge(data, BSE_Sensex, how="left", on="Date")
data = pd.merge(data, ASX_200, how="left", on="Date")
data = pd.merge(data, GMMP_ETF, how="left", on="Date")
data = pd.merge(data, RESM_ETF, how="left", on="Date")
wig20 = pd.read_csv("/dissertation_data/WIG20_historical_data.csv", index_col=0)
wig20["WIG20_volume"] = wig20["WIG20_volume"].apply(convert_volume_to_number)
wig20 = wig20.reset_index()
wig20["Date"] = pd.to_datetime(wig20["Date"], format="%d/%m/%Y").dt.strftime(
"%Y-%m-%d %H:%M:%S"
)
wig20.set_index("Date", inplace=True)
wig20.index = pd.to_datetime(wig20.index)
data = pd.merge(data, wig20, how="left", on="Date")
data = data.drop(["Open", "High", "Low", "Close"], axis=1)
# create a new column with month and year only
data["month_year"] = pd.to_datetime(data.index.strftime("%Y-%m"))
# loop through the inflation_dict and assign inflation rate to corresponding month
for date_str, rate in inflation_dict.items():
date = datetime.datetime.strptime(date_str, "%Y-%m-%d")
mask = data["month_year"] == date.replace(day=1)
data.loc[mask, "inflation_rate"] = rate
    # loop through the interest_dict and assign the interest rate to the corresponding month
for date_str, rate in interest_dict.items():
date = datetime.datetime.strptime(date_str, "%Y-%m-%d")
mask = data["month_year"] == date.replace(day=1)
data.loc[mask, "interest_rate"] = rate
    # loop through the m3poland_dict and assign the M3 money supply to the corresponding month
for date_str, rate in m3poland_dict.items():
date = datetime.datetime.strptime(date_str, "%Y-%m-%d")
mask = data["month_year"] == date.replace(day=1)
data.loc[mask, "M3_rate"] = rate
data[["M3_rate"]] = data[["M3_rate"]].div(kurs)
data["ma14"] = data["Adj Close"].rolling(window=14).mean()
data["ma50"] = data["Adj Close"].rolling(window=50).mean()
data["ma100"] = data["Adj Close"].rolling(window=100).mean()
data["ma200"] = data["Adj Close"].rolling(window=200).mean()
data["copper_price"] = data["copper_price"].ffill()
data["silver_price"] = data["silver_price"].ffill()
data["gold_price"] = data["gold_price"].ffill()
data["sp500_price"] = data["sp500_price"].ffill()
data["DJIA_price"] = data["DJIA_price"].ffill()
data["NASDAQ_Composite_price"] = data["NASDAQ_Composite_price"].ffill()
data["FTSE_100_price"] = data["FTSE_100_price"].ffill()
data["DAX_price"] = data["DAX_price"].ffill()
data["CAC_40_price"] = data["CAC_40_price"].ffill()
data["NIKKEI_225_price"] = data["NIKKEI_225_price"].ffill()
data["SHANGHAI_Composite_price"] = data["SHANGHAI_Composite_price"].ffill()
data["Hang_Seng_Index_price"] = data["Hang_Seng_Index_price"].ffill()
data["BSE_Sensex_price"] = data["BSE_Sensex_price"].ffill()
data["ASX_200_price"] = data["ASX_200_price"].ffill()
data["GMMP_ETF_price"] = data["GMMP_ETF_price"].ffill()
data["RESM_ETF_price"] = data["RESM_ETF_price"].ffill()
data["EURUSD=X_price"] = data["EURUSD=X_price"].ffill()
data["JPY=X_price"] = data["JPY=X_price"].ffill()
data["CNY=X_price"] = data["CNY=X_price"].ffill()
data["PLN=X_price"] = data["PLN=X_price"].ffill()
data["WIG20_price"] = data["WIG20_price"].ffill()
data["WIG20_volume"] = data["WIG20_volume"].ffill()
data["WIG20_change"] = data["WIG20_change"].ffill()
# convert WIG20_price to float
data["WIG20_price"] = data["WIG20_price"].str.replace(",", "").astype(float)
# convert WIG20_change to float
data["WIG20_change"] = data["WIG20_change"].str.replace("%", "").astype(float)
ma14_mean = data["ma14"].mean()
data["ma14"].fillna(ma14_mean, inplace=True)
ma50_mean = data["ma50"].mean()
data["ma50"].fillna(ma50_mean, inplace=True)
ma100_mean = data["ma100"].mean()
data["ma100"].fillna(ma100_mean, inplace=True)
ma200_mean = data["ma200"].mean()
data["ma200"].fillna(ma200_mean, inplace=True)
financial_results = pd.read_csv(
cwd + "/dissertation_data/financial_results.csv", index_col=0
)
financial_results.index = pd.to_datetime(financial_results.index)
financial_results["month_year"] = financial_results.index.strftime("%Y-%m")
financial_results["month_year"] = pd.to_datetime(financial_results["month_year"])
# create new 'dates' column
data["dates"] = data.index
# merge the two dataframes based on the month_year column
data = pd.merge(data, financial_results, on="month_year")
# Extract month from month_year column
data["month"] = data["month_year"].dt.month
# Create quarters column
data["quarters"] = data["month"].apply(lambda x: (x - 1) // 3 + 1)
data = data.drop(["month_year", "month"], axis=1)
data["target"] = data["Adj Close"].shift(
shift_back_num
) # Shift the close price 21 days up
# List of features to create lagged values for
features = ["Adj Close"]
# Add lagged values for each feature
for feature in features:
for i in range(1, 15):
data[f"{feature}_lag_{i}"] = data[feature].shift(i)
# current #future
data["target_class"] = list(
map(classify_movement, data["Adj Close"], data["target"])
)
# Get all rows with NaN target value
last_21_records = data[data["target"].isna()]
new_startdate = data["dates"].iloc[0]
new_enddate = data["dates"].iloc[-1]
print(
"\n\n\n***IMPORTANT INFORMATION***\n there is new startdate and endate due to cleaning process..."
)
print("NEW STARTDATE: ", new_startdate)
print("NEW ENDDATE: ", new_enddate)
data.dropna(inplace=True)
data = data.drop(["dates", "target"], axis=1)
last_21_records = last_21_records.drop(["target", "target_class"], axis=1)
# Create the directory if it doesn't exist
if not os.path.exists(model_version_folder_path):
os.makedirs(model_version_folder_path)
data.to_csv("/dissertation_data/kghm_" + data_version + ".csv")
last_21_records.to_csv(
"/dissertation_data/kghm_validation_" + data_version + ".csv"
)
return data, last_index, new_startdate, new_enddate, last_21_records
data, last_index, new_startdate, new_enddate, last_21_records = clean_data(data)
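# Illustrative sketch (not in the original notebook): relate the numeric target_class
# labels produced by clean_data() back to the four movement categories described in the
# introduction and check how balanced they are.
label_names = {0: "up_more_5%", 1: "down_more_5%", 2: "up_less_5%", 3: "down_less_5%"}
print(data["target_class"].map(label_names).value_counts(normalize=True))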
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/772/129772732.ipynb
|
cooper-mining-company-stock-price-prediction
|
maciejgronczynski
|
[{"Id": 129772732, "ScriptId": 38592772, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5276846, "CreationDate": "05/16/2023 11:15:28", "VersionNumber": 1.0, "Title": "KGHM Stock Price Prediction 21 days in future.80%", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 455.0, "LinesInsertedFromPrevious": 455.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186134091, "KernelVersionId": 129772732, "SourceDatasetVersionId": 5697920}]
|
[{"Id": 5697920, "DatasetId": 3275598, "DatasourceVersionId": 5773564, "CreatorUserId": 5276846, "LicenseName": "Other (specified in description)", "CreationDate": "05/16/2023 11:14:27", "VersionNumber": 2.0, "Title": "Copper Mining Company - Stock Price Prediction", "Slug": "cooper-mining-company-stock-price-prediction", "Subtitle": "KGHM Dataset: A Comprehensive Financial and Economic Data Collection for Stock", "Description": "\"The 'KGHM Dataset' is a meticulously curated collection of financial and economic data specifically designed for the purpose of stock price prediction for KGHM, a leading copper mining company. This dataset encompasses a wide range of features including historical prices, macroeconomic indicators, industry-related metrics, company-specific financials, and technical indicators. The dataset comprises 59 carefully selected features that have the potential to influence the stock price of KGHM. The data has been sourced from reputable platforms such as Yahoo Finance, Wikipedia, and the official website of KGHM. The dataset has undergone rigorous pre-processing and feature engineering to ensure data quality and relevance for the machine learning models used in the stock price prediction analysis. It serves as a valuable resource for conducting in-depth analysis and developing accurate predictive models for KGHM's stock price movements.\"", "VersionNotes": "Data Update 2023-05-16", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3275598, "CreatorUserId": 5276846, "OwnerUserId": 5276846.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5697920.0, "CurrentDatasourceVersionId": 5773564.0, "ForumId": 3341273, "Type": 2, "CreationDate": "05/16/2023 08:42:48", "LastActivityDate": "05/16/2023", "TotalViews": 883, "TotalDownloads": 111, "TotalVotes": 1, "TotalKernels": 2}]
|
[{"Id": 5276846, "UserName": "maciejgronczynski", "DisplayName": "Maciej Gronczynski", "RegisterDate": "06/10/2020", "PerformanceTier": 2}]
|
import os
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
import os
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
from sklearn.model_selection import GridSearchCV
import numpy as np
import datetime
import sklearn.metrics as metrics
#!pip install yfinance
import yfinance as yf
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
ticker = "KGH.WA"
kurs = 4.21 # USD/PLN
currency = "USD"
# IMPORTANT READ BEFORE SETUP
# startdate you set will be x-14 (reason behind is that there's no access to
# lagged 14 adj closing price, that why first 14 records are being dropped)
startdate = "2017-03-01"
# IMPORTANT READ BEFORE SETUP
# enddate you set will be x-21 (reason behind it is that last 21 days, will have no target variable)
# target variables predicted for the last 21 days will also be predicition for next 21 days.
# so for example if you set enddate to 2023-01-01, you will be able to predict from
# 2023-01-02 till 2023-01-23 (21 days)
enddate = "2023-03-10"
model_version = "1.0"
data_version = "v1"
shift_back_num = -21
data = yf.download(ticker, startdate, enddate)
# # ***Introduction***
# Hello everyone! Welcome to this Kaggle notebook where we will be exploring the fascinating world of stock price prediction. Predicting stock prices has always been an attractive topic to both newbies and experienced traders in the financial industry. In this notebook, we will dive into the complexities of financial market data, and we will attempt to make sense of it using Machine Learning.
# We will start by importing and cleaning our data, which is a critical step in any data science project. The dataset we'll be using contains daily stock prices, with features like Open, High, Low, Close prices, and Volume of transactions. Our target variable, which we'll aim to predict, is the 'target_class' column.
# "target_class" contains 4 values:
# "up_more_5%"
# "down_more_5%"
# "up_less_5%"
# "down_less_5%"
# In this notebook I will try to predict target label for KGHM price 21 into the future. As data I gathered is between 2017-03-01 and 2023-03-10 we will be able to compare our predictions to actual KGHM price.
# I will explain and show how I collected the data from the scratch, including cleaning process, but you can access final dataset here : https://www.kaggle.com/datasets/maciejgronczynski/cooper-mining-company-stock-price-prediction
# First we will start with defining few functions, that we will need in next steps:
# The **exchange_rates** function retrieves historical exchange rate data for a selection of currencies (EUR, JPY, CNY, and PLN) relative to USD. It requires two date parameters: startdate and enddate to specify the data collection period.
# The function uses the yfinance library to download data from Yahoo Finance, keeping only the adjusted close price for each day. Each currency's data is stored in a .csv file and combined into a single DataFrame, with each column representing a different currency pair's daily exchange rate.
# Finally, the DataFrame is returned, providing a consolidated view of exchange rate trends over the specified time period.
def exchange_rates(startdate, enddate):
ex_rates = ["EURUSD=X", "JPY=X", "CNY=X", "PLN=X"]
final_data = pd.DataFrame()
for ticker in ex_rates:
data = yf.download(ticker, startdate, enddate)
data = data.drop(["Open", "High", "Low", "Close", "Volume"], axis=1)
name = ticker + "_price"
data.rename(columns={"Adj Close": name}, inplace=True)
data.to_csv("dissertation_data.csv")
# merge dataframes using pd.concat()
final_data = pd.concat([final_data, data[name]], axis=1)
final_data = final_data.rename_axis("Date")
final_data.index = pd.to_datetime(final_data.index)
return final_data
#
# The **convert_volume_to_number** function takes as an input a string that represents a volume of stocks, potentially ending in the letter 'M' to denote 'million'.
# This function checks whether the last character of the input string is 'M'. If it is, the function removes the 'M' and converts the rest of the string to a float, then multiplies it by 1,000,000 to get the volume in numeric form. This allows us to handle stock volumes given in millions in a numerical format.
# If the last character is not 'M', the function simply converts the string to an integer. This is done under the assumption that if the volume is not denoted in millions, it is an exact numeric representation.
# In both cases, the function returns the volume as an integer.
def convert_volume_to_number(volume_str):
if volume_str[-1] == "M":
return int(float(volume_str[:-1]) * 1000000)
else:
return int(volume_str)
#
# The **classify_movement** function is used to categorize changes in stock prices between the current and future periods. This function takes two arguments: current, which represents the current stock price, and future, which is the stock price at a future time.
# The function returns an integer representing one of four categories based on the percentage change in stock price:
# If the future price is more than 5% higher than the current price, the function returns 0, indicating a significant price increase.
# If the future price is more than 5% lower than the current price, the function returns 1, indicating a significant price decrease.
# If the future price is higher but less than 5% above the current price, the function returns 2, indicating a minor price increase.
# If none of the above conditions are met, the function returns 3, implying a minor price decrease (less than 5%).
def classify_movement(current, future):
if future > current * 1.05: # Up more than 5%
return 0
elif future < current * 0.95: # Down more than 5%
return 1
elif future > current: # Up less than 5%
return 2
else: # Down less than 5%
return 3
# In addition to the data we'll be downloading directly via the yfinance library, we've also manually web scraped some crucial economic indicators that are not readily available on Yahoo Finance. These indicators include the inflation rate, interest rate, and the M3 money supply for Poland.
inflation_dict = {
# 2017
"2017-03-01": 2,
"2017-04-01": 2,
"2017-05-01": 1.9,
"2017-06-01": 1.5,
"2017-07-01": 1.7,
"2017-08-01": 1.8,
"2017-09-01": 2.2,
"2017-10-01": 2.1,
"2017-11-01": 2.5,
"2017-12-01": 2.1,
# 2018
"2018-01-01": 1.9,
"2018-02-01": 1.4,
"2018-03-01": 1.3,
"2018-04-01": 1.6,
"2018-05-01": 1.7,
"2018-06-01": 2,
"2018-07-01": 2,
"2018-08-01": 2,
"2018-09-01": 1.9,
"2018-10-01": 1.7,
"2018-11-01": 1.3,
"2018-12-01": 1.1,
# 2019
"2019-01-01": 0.7,
"2019-02-01": 1.2,
"2019-03-01": 1.7,
"2019-04-01": 2.2,
"2019-05-01": 2.4,
"2019-06-01": 2.6,
"2019-07-01": 2.9,
"2019-08-01": 2.9,
"2019-09-01": 2.6,
"2019-10-01": 2.5,
"2019-11-01": 2.6,
"2019-12-01": 3.4,
# 2020
"2020-01-01": 4.4,
"2020-02-01": 4.7,
"2020-03-01": 4.6,
"2020-04-01": 3.4,
"2020-05-01": 2.9,
"2020-06-01": 3.3,
"2020-07-01": 3,
"2020-08-01": 2.9,
"2020-09-01": 3.2,
"2020-10-01": 3.1,
"2020-11-01": 3,
"2020-12-01": 2.4,
# 2021
"2021-01-01": 2.6,
"2021-02-01": 2.4,
"2021-03-01": 3.2,
"2021-04-01": 4.3,
"2021-05-01": 4.7,
"2021-06-01": 4.4,
"2021-07-01": 5,
"2021-08-01": 5.5,
"2021-09-01": 5.9,
"2021-10-01": 6.8,
"2021-11-01": 7.8,
"2021-12-01": 8.6,
# 2022
"2022-01-01": 9.4,
"2022-02-01": 8.5,
"2022-03-01": 11,
"2022-04-01": 12.3,
"2022-05-01": 13.9,
"2022-06-01": 15.5,
"2022-07-01": 15.6,
"2022-08-01": 16.1,
"2022-09-01": 17.2,
"2022-10-01": 17.9,
"2022-11-01": 17.5,
"2022-12-01": 16.6,
# 2023
"2023-01-01": 16.6,
"2023-02-01": 18.4,
"2023-03-01": 16.2,
}
interest_dict = {
# 2017
"2017-03-01": 1.5,
"2017-04-01": 1.5,
"2017-05-01": 1.5,
"2017-06-01": 1.5,
"2017-07-01": 1.5,
"2017-08-01": 1.5,
"2017-09-01": 1.5,
"2017-10-01": 1.5,
"2017-11-01": 1.5,
"2017-12-01": 1.5,
# 2018
"2018-01-01": 1.5,
"2018-02-01": 1.5,
"2018-03-01": 1.5,
"2018-04-01": 1.5,
"2018-05-01": 1.5,
"2018-06-01": 1.5,
"2018-07-01": 1.5,
"2018-08-01": 1.5,
"2018-09-01": 1.5,
"2018-10-01": 1.5,
"2018-11-01": 1.5,
"2018-12-01": 1.5,
# 2019
"2019-01-01": 1.5,
"2019-02-01": 1.5,
"2019-03-01": 1.5,
"2019-04-01": 1.5,
"2019-05-01": 1.5,
"2019-06-01": 1.5,
"2019-07-01": 1.5,
"2019-08-01": 1.5,
"2019-09-01": 1.5,
"2019-10-01": 1.5,
"2019-11-01": 1.5,
"2019-12-01": 1.5,
# 2020
"2020-01-01": 1.5,
"2020-02-01": 1.5,
"2020-03-01": 1,
"2020-04-01": 0.5,
"2020-05-01": 0.1,
"2020-06-01": 0.1,
"2020-07-01": 0.1,
"2020-08-01": 0.1,
"2020-09-01": 0.1,
"2020-10-01": 0.1,
"2020-11-01": 0.1,
"2020-12-01": 0.1,
# 2021
"2021-01-01": 0.1,
"2021-02-01": 0.1,
"2021-03-01": 0.1,
"2021-04-01": 0.1,
"2021-05-01": 0.1,
"2021-06-01": 0.1,
"2021-07-01": 0.1,
"2021-08-01": 0.1,
"2021-09-01": 0.1,
"2021-10-01": 0.5,
"2021-11-01": 1.25,
"2021-12-01": 1.75,
# 2022
"2022-01-01": 2.25,
"2022-02-01": 2.75,
"2022-03-01": 3.5,
"2022-04-01": 4.5,
"2022-05-01": 5.25,
"2022-06-01": 6,
"2022-07-01": 6.5,
"2022-08-01": 6.75,
"2022-09-01": 6.75,
"2022-10-01": 6.75,
"2022-11-01": 6.75,
"2022-12-01": 6.75,
# 2023
"2023-01-01": 6.75,
"2023-02-01": 6.75,
"2023-03-01": 6.75,
}
m3poland_dict = {
"2023-01-01": 2091314.88,
"2023-02-01": 2131400.00,
"2023-03-01": 2131400.00,
# 2022
"2022-01-01": 1985020.62,
"2022-02-01": 1985020.62,
"2022-03-01": 2009566.25,
"2022-04-01": 2009566.25,
"2022-05-01": 2009566.25,
"2022-06-01": 1998843.50,
"2022-07-01": 1998843.50,
"2022-08-01": 1998843.50,
"2022-09-01": 2062092.75,
"2022-10-01": 2062092.75,
"2022-11-01": 2062092.75,
"2022-12-01": 2091314.88,
# 2021
"2021-01-01": 1822650.12,
"2021-02-01": 1822650.12,
"2021-03-01": 1862487.75,
"2021-04-01": 1862487.75,
"2021-05-01": 1862487.75,
"2021-06-01": 1876000.62,
"2021-07-01": 1876000.62,
"2021-08-01": 1876000.62,
"2021-09-01": 1985020.62,
"2021-10-01": 1985020.62,
"2021-11-01": 1985020.62,
"2021-12-01": 1985020.62,
# 2020
"2020-01-01": 1565639.75,
"2020-02-01": 1565639.75,
"2020-03-01": 1628423.38,
"2020-04-01": 1628423.38,
"2020-05-01": 1628423.38,
"2020-06-01": 1746224.75,
"2020-07-01": 1746224.75,
"2020-08-01": 1746224.75,
"2020-09-01": 1762175.62,
"2020-10-01": 1762175.62,
"2020-11-01": 1762175.62,
"2020-12-01": 1822650.12,
# 2019
"2019-01-01": 1446093.38,
"2019-02-01": 1446093.38,
"2019-03-01": 1457187.12,
"2019-04-01": 1457187.12,
"2019-05-01": 1457187.12,
"2019-06-01": 1478217.75,
"2019-07-01": 1478217.75,
"2019-08-01": 1478217.75,
"2019-09-01": 1506171.25,
"2019-10-01": 1506171.25,
"2019-11-01": 1506171.25,
"2019-12-01": 1565639.75,
# 2018
"2018-01-01": 1324383.25,
"2018-02-01": 1324383.25,
"2018-03-01": 1325795.62,
"2018-04-01": 1325795.62,
"2018-05-01": 1325795.62,
"2018-06-01": 1352491.88,
"2018-07-01": 1352491.88,
"2018-08-01": 1352491.88,
"2018-09-01": 1376164.75,
"2018-10-01": 1376164.75,
"2018-11-01": 1376164.75,
"2018-12-01": 1446093.38,
# 2017
"2017-03-01": 1261178.12,
"2017-04-01": 1261178.12,
"2017-05-01": 1261178.12,
"2017-06-01": 1261178.12,
"2017-07-01": 1261178.12,
"2017-08-01": 1261178.12,
"2017-09-01": 1275942.38,
"2017-10-01": 1275942.38,
"2017-11-01": 1275942.38,
"2017-12-01": 1324383.25,
}
def clean_data(data):
last_index = data.iloc[-1].name
print(
"Last available date in data is:",
last_index,
"Live Prediction will be on that date.",
)
data[["Adj Close"]] = data[["Adj Close"]].div(kurs)
copper = yf.download("HG=F", start=startdate, end=enddate)
silver = yf.download("SI=F", start=startdate, end=enddate)
gold = yf.download("GLD", start=startdate, end=enddate)
sp500 = yf.download("^GSPC", start=startdate, end=enddate)
DJIA = yf.download("^DJI", start=startdate, end=enddate)
NASDAQ_Composite = yf.download("^IXIC", start=startdate, end=enddate)
FTSE_100 = yf.download("^FTSE", start=startdate, end=enddate)
DAX = yf.download("^GDAXI", start=startdate, end=enddate)
CAC_40 = yf.download("^FCHI", start=startdate, end=enddate)
NIKKEI_225 = yf.download("^N225", start=startdate, end=enddate)
SHANGHAI_Composite = yf.download("000001.SS", start=startdate, end=enddate)
Hang_Seng_Index = yf.download("^HSI", start=startdate, end=enddate)
BSE_Sensex = yf.download("^BSESN", start=startdate, end=enddate)
ASX_200 = yf.download("^AXJO", start=startdate, end=enddate)
GMMP_ETF = yf.download("PICK", start=startdate, end=enddate)
RESM_ETF = yf.download("REMX", start=startdate, end=enddate)
copper = pd.DataFrame(copper["Adj Close"])
silver = pd.DataFrame(silver["Adj Close"])
gold = pd.DataFrame(gold["Adj Close"])
rates = exchange_rates(startdate, enddate)
sp500 = pd.DataFrame(sp500["Adj Close"])
DJIA = pd.DataFrame(DJIA["Adj Close"])
NASDAQ_Composite = pd.DataFrame(NASDAQ_Composite["Adj Close"])
FTSE_100 = pd.DataFrame(FTSE_100["Adj Close"])
DAX = pd.DataFrame(DAX["Adj Close"])
CAC_40 = pd.DataFrame(CAC_40["Adj Close"])
NIKKEI_225 = pd.DataFrame(NIKKEI_225["Adj Close"])
SHANGHAI_Composite = pd.DataFrame(SHANGHAI_Composite["Adj Close"])
Hang_Seng_Index = pd.DataFrame(Hang_Seng_Index["Adj Close"])
BSE_Sensex = pd.DataFrame(BSE_Sensex["Adj Close"])
ASX_200 = pd.DataFrame(ASX_200["Adj Close"])
GMMP_ETF = pd.DataFrame(GMMP_ETF["Adj Close"])
RESM_ETF = pd.DataFrame(RESM_ETF["Adj Close"])
# Rename the "Adj Close" column to "copper_price" and "silver_price"
copper.rename(columns={"Adj Close": "copper_price"}, inplace=True)
silver.rename(columns={"Adj Close": "silver_price"}, inplace=True)
gold.rename(columns={"Adj Close": "gold_price"}, inplace=True)
sp500.rename(columns={"Adj Close": "sp500_price"}, inplace=True)
DJIA.rename(columns={"Adj Close": "DJIA_price"}, inplace=True)
NASDAQ_Composite.rename(
columns={"Adj Close": "NASDAQ_Composite_price"}, inplace=True
)
FTSE_100.rename(columns={"Adj Close": "FTSE_100_price"}, inplace=True)
DAX.rename(columns={"Adj Close": "DAX_price"}, inplace=True)
CAC_40.rename(columns={"Adj Close": "CAC_40_price"}, inplace=True)
NIKKEI_225.rename(columns={"Adj Close": "NIKKEI_225_price"}, inplace=True)
SHANGHAI_Composite.rename(
columns={"Adj Close": "SHANGHAI_Composite_price"}, inplace=True
)
Hang_Seng_Index.rename(columns={"Adj Close": "Hang_Seng_Index_price"}, inplace=True)
BSE_Sensex.rename(columns={"Adj Close": "BSE_Sensex_price"}, inplace=True)
ASX_200.rename(columns={"Adj Close": "ASX_200_price"}, inplace=True)
GMMP_ETF.rename(columns={"Adj Close": "GMMP_ETF_price"}, inplace=True)
RESM_ETF.rename(columns={"Adj Close": "RESM_ETF_price"}, inplace=True)
data = pd.merge(data, copper, how="left", on="Date")
data = pd.merge(data, silver, how="left", on="Date")
data = pd.merge(data, gold, how="left", on="Date")
data = pd.merge(data, rates, how="left", on="Date")
data = pd.merge(data, sp500, how="left", on="Date")
data = pd.merge(data, DJIA, how="left", on="Date")
data = pd.merge(data, NASDAQ_Composite, how="left", on="Date")
data = pd.merge(data, FTSE_100, how="left", on="Date")
data = pd.merge(data, DAX, how="left", on="Date")
data = pd.merge(data, CAC_40, how="left", on="Date")
data = pd.merge(data, NIKKEI_225, how="left", on="Date")
data = pd.merge(data, SHANGHAI_Composite, how="left", on="Date")
data = pd.merge(data, Hang_Seng_Index, how="left", on="Date")
data = pd.merge(data, BSE_Sensex, how="left", on="Date")
data = pd.merge(data, ASX_200, how="left", on="Date")
data = pd.merge(data, GMMP_ETF, how="left", on="Date")
data = pd.merge(data, RESM_ETF, how="left", on="Date")
wig20 = pd.read_csv("/dissertation_data/WIG20_historical_data.csv", index_col=0)
wig20["WIG20_volume"] = wig20["WIG20_volume"].apply(convert_volume_to_number)
wig20 = wig20.reset_index()
wig20["Date"] = pd.to_datetime(wig20["Date"], format="%d/%m/%Y").dt.strftime(
"%Y-%m-%d %H:%M:%S"
)
wig20.set_index("Date", inplace=True)
wig20.index = pd.to_datetime(wig20.index)
data = pd.merge(data, wig20, how="left", on="Date")
data = data.drop(["Open", "High", "Low", "Close"], axis=1)
# create a new column with month and year only
data["month_year"] = pd.to_datetime(data.index.strftime("%Y-%m"))
# loop through the inflation_dict and assign inflation rate to corresponding month
for date_str, rate in inflation_dict.items():
date = datetime.datetime.strptime(date_str, "%Y-%m-%d")
mask = data["month_year"] == date.replace(day=1)
data.loc[mask, "inflation_rate"] = rate
# loop through the intrest_dict and assign inflation rate to corresponding month
for date_str, rate in interest_dict.items():
date = datetime.datetime.strptime(date_str, "%Y-%m-%d")
mask = data["month_year"] == date.replace(day=1)
data.loc[mask, "interest_rate"] = rate
# loop through the intrest_dict and assign inflation rate to corresponding month
for date_str, rate in m3poland_dict.items():
date = datetime.datetime.strptime(date_str, "%Y-%m-%d")
mask = data["month_year"] == date.replace(day=1)
data.loc[mask, "M3_rate"] = rate
data[["M3_rate"]] = data[["M3_rate"]].div(kurs)
data["ma14"] = data["Adj Close"].rolling(window=14).mean()
data["ma50"] = data["Adj Close"].rolling(window=50).mean()
data["ma100"] = data["Adj Close"].rolling(window=100).mean()
data["ma200"] = data["Adj Close"].rolling(window=200).mean()
data["copper_price"] = data["copper_price"].ffill()
data["silver_price"] = data["silver_price"].ffill()
data["gold_price"] = data["gold_price"].ffill()
data["sp500_price"] = data["sp500_price"].ffill()
data["DJIA_price"] = data["DJIA_price"].ffill()
data["NASDAQ_Composite_price"] = data["NASDAQ_Composite_price"].ffill()
data["FTSE_100_price"] = data["FTSE_100_price"].ffill()
data["DAX_price"] = data["DAX_price"].ffill()
data["CAC_40_price"] = data["CAC_40_price"].ffill()
data["NIKKEI_225_price"] = data["NIKKEI_225_price"].ffill()
data["SHANGHAI_Composite_price"] = data["SHANGHAI_Composite_price"].ffill()
data["Hang_Seng_Index_price"] = data["Hang_Seng_Index_price"].ffill()
data["BSE_Sensex_price"] = data["BSE_Sensex_price"].ffill()
data["ASX_200_price"] = data["ASX_200_price"].ffill()
data["GMMP_ETF_price"] = data["GMMP_ETF_price"].ffill()
data["RESM_ETF_price"] = data["RESM_ETF_price"].ffill()
data["EURUSD=X_price"] = data["EURUSD=X_price"].ffill()
data["JPY=X_price"] = data["JPY=X_price"].ffill()
data["CNY=X_price"] = data["CNY=X_price"].ffill()
data["PLN=X_price"] = data["PLN=X_price"].ffill()
data["WIG20_price"] = data["WIG20_price"].ffill()
data["WIG20_volume"] = data["WIG20_volume"].ffill()
data["WIG20_change"] = data["WIG20_change"].ffill()
# convert WIG20_price to float
data["WIG20_price"] = data["WIG20_price"].str.replace(",", "").astype(float)
# convert WIG20_change to float
data["WIG20_change"] = data["WIG20_change"].str.replace("%", "").astype(float)
ma14_mean = data["ma14"].mean()
data["ma14"].fillna(ma14_mean, inplace=True)
ma50_mean = data["ma50"].mean()
data["ma50"].fillna(ma50_mean, inplace=True)
ma100_mean = data["ma100"].mean()
data["ma100"].fillna(ma100_mean, inplace=True)
ma200_mean = data["ma200"].mean()
data["ma200"].fillna(ma200_mean, inplace=True)
financial_results = pd.read_csv(
cwd + "/dissertation_data/financial_results.csv", index_col=0
)
financial_results.index = pd.to_datetime(financial_results.index)
financial_results["month_year"] = financial_results.index.strftime("%Y-%m")
financial_results["month_year"] = pd.to_datetime(financial_results["month_year"])
# create new 'dates' column
data["dates"] = data.index
# merge the two dataframes based on the month_year column
data = pd.merge(data, financial_results, on="month_year")
# Extract month from month_year column
data["month"] = data["month_year"].dt.month
# Create quarters column
data["quarters"] = data["month"].apply(lambda x: (x - 1) // 3 + 1)
data = data.drop(["month_year", "month"], axis=1)
data["target"] = data["Adj Close"].shift(
shift_back_num
) # Shift the close price 21 days up
# List of features to create lagged values for
features = ["Adj Close"]
# Add lagged values for each feature
for feature in features:
for i in range(1, 15):
data[f"{feature}_lag_{i}"] = data[feature].shift(i)
# classify the direction of the move from the current Adj Close to the future (shifted) target
data["target_class"] = list(
map(classify_movement, data["Adj Close"], data["target"])
)
# Get all rows with NaN target value
last_21_records = data[data["target"].isna()]
new_startdate = data["dates"].iloc[0]
new_enddate = data["dates"].iloc[-1]
print(
"\n\n\n***IMPORTANT INFORMATION***\n there is new startdate and endate due to cleaning process..."
)
print("NEW STARTDATE: ", new_startdate)
print("NEW ENDDATE: ", new_enddate)
data.dropna(inplace=True)
data = data.drop(["dates", "target"], axis=1)
last_21_records = last_21_records.drop(["target", "target_class"], axis=1)
# Create the directory if it doesn't exist
if not os.path.exists(model_version_folder_path):
os.makedirs(model_version_folder_path)
data.to_csv("/dissertation_data/kghm_" + data_version + ".csv")
last_21_records.to_csv(
"/dissertation_data/kghm_validation_" + data_version + ".csv"
)
return data, last_index, new_startdate, new_enddate, last_21_records
data, last_index, new_startdate, new_enddate, last_21_records = clean_data(data)
| false | 0 | 10,039 | 0 | 10,270 | 10,039 |
||
129691389
|
# # Import libraries
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
# # Load data
data_folder = "../input/icr-identify-age-related-conditions/"
train = pd.read_csv(f"{data_folder}train.csv")
greeks = pd.read_csv(f"{data_folder}greeks.csv")
sample_submission = pd.read_csv(f"{data_folder}sample_submission.csv")
test = pd.read_csv(f"{data_folder}test.csv")
train.head()
train.columns
test.head()
# # Data preparation
to_keep = [
"DU",
"AB",
"CR",
"FI",
"EG",
"DA",
"CC",
"FL",
"EE",
"EB",
"DI",
"GH",
"DE",
"EH",
"AY",
"FC",
"CD ",
"GL",
"EP",
"CS",
"CH",
"FD ",
"BC",
"AM",
"FS",
"EU",
"DH",
]
X_train = train[to_keep]
y_train = train["Class"]
imputer = SimpleImputer(missing_values=np.nan, strategy="median")
imputer.fit(X_train)
X_train = imputer.transform(X_train)
# # Model fit
m = RandomForestClassifier(
criterion="entropy", max_features=1.0, min_samples_leaf=3, n_jobs=-1
)
m.fit(X_train, y_train)
# # Submission
submission = pd.DataFrame(columns=sample_submission.columns)
submission.head()
submission["Id"] = test["Id"]
X_test = test[to_keep]
X_test = imputer.transform(X_test)
submission["class_0"] = m.predict_proba(X_test)[:, 0]
submission["class_1"] = m.predict_proba(X_test)[:, 1]
submission.head()
submission.loc[:, ["class_0", "class_1"]] = submission.loc[
:, ["class_0", "class_1"]
].astype(np.float64)
submission.to_csv("submission.csv", index=False)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/691/129691389.ipynb
| null | null |
[{"Id": 129691389, "ScriptId": 38548418, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 737692, "CreationDate": "05/15/2023 19:30:20", "VersionNumber": 2.0, "Title": "AML_baseline", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 91.0, "LinesInsertedFromPrevious": 21.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 70.0, "LinesInsertedFromFork": 65.0, "LinesDeletedFromFork": 12.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 26.0, "TotalVotes": 1}]
| null | null | null | null |
# # Import libraries
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
# # Load data
data_folder = "../input/icr-identify-age-related-conditions/"
train = pd.read_csv(f"{data_folder}train.csv")
greeks = pd.read_csv(f"{data_folder}greeks.csv")
sample_submission = pd.read_csv(f"{data_folder}sample_submission.csv")
test = pd.read_csv(f"{data_folder}test.csv")
train.head()
train.columns
test.head()
# # Data preparation
to_keep = [
"DU",
"AB",
"CR",
"FI",
"EG",
"DA",
"CC",
"FL",
"EE",
"EB",
"DI",
"GH",
"DE",
"EH",
"AY",
"FC",
"CD ",
"GL",
"EP",
"CS",
"CH",
"FD ",
"BC",
"AM",
"FS",
"EU",
"DH",
]
X_train = train[to_keep]
y_train = train["Class"]
imputer = SimpleImputer(missing_values=np.nan, strategy="median")
imputer.fit(X_train)
X_train = imputer.transform(X_train)
# # Model fit
m = RandomForestClassifier(
criterion="entropy", max_features=1.0, min_samples_leaf=3, n_jobs=-1
)
m.fit(X_train, y_train)
# # Submission
submission = pd.DataFrame(columns=sample_submission.columns)
submission.head()
submission["Id"] = test["Id"]
X_test = test[to_keep]
X_test = imputer.transform(X_test)
submission["class_0"] = m.predict_proba(X_test)[:, 0]
submission["class_1"] = m.predict_proba(X_test)[:, 1]
submission.head()
submission.loc[:, ["class_0", "class_1"]] = submission.loc[
:, ["class_0", "class_1"]
].astype(np.float64)
submission.to_csv("submission.csv", index=False)
| false | 0 | 561 | 1 | 561 | 561 |
||
129691527
|
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import learning_curve
import optuna
from keras.backend import clear_session
import matplotlib.pyplot as plt
df = pd.read_csv("/kaggle/input/bacteria/my.csv")
df
# random_values = np.random.rand(len(df[df.Beta_corrige == 0.001]))
# df['Beta_corrige'][df.Beta_corrige == 0.001] = random_values
df = df[df.Beta_corrige != 0.001]
X = df[["ENT", "NOCA"]].values
y = df["Beta_corrige"].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
model = keras.Sequential(
[
layers.Dense(2, activation="relu", input_shape=[2]),
layers.Dense(1, activation="relu"),
]
)
model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.001),
loss="mse",
metrics=["mae", "mse", "accuracy"],
)
callback = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10)
history = model.fit(
X_train,
y_train,
validation_data=(X_test, y_test),
epochs=1000,
validation_split=0.1,
callbacks=[callback],
verbose=0,
)
test_loss, test_mae, test_mse, test_acc = model.evaluate(X_test, y_test, verbose=0)
print("MAE score:", test_mae)
print("MSE score:", test_mse)
# Plot MAE and MSE scores
plt.figure(figsize=(10, 6))
# plt.plot(history.history['mae'], label='MAE')
# plt.plot(history.history['mse'], label='MSE')
plt.plot(history.history["val_loss"], label="VAL_LOSS")
plt.plot(history.history["loss"], label="LOSS")
plt.xlabel("Epoch")
plt.ylabel("Score")
plt.title("MAE and MSE Scores")
plt.legend()
plt.show()
y_pred = model.predict(X_test)
test_df = pd.DataFrame(
{"y_test": y_test.tolist(), "y_pred": y_pred.tolist(), "X_test": X_test.tolist()}
)
test_df.head(10)
# Retrieve weights and biases for each layer
weights = []
biases = []
for layer in model.layers:
layer_weights, layer_biases = layer.get_weights()
weights.append(layer_weights)
biases.append(layer_biases)
def forward_propagation(inputs):
print("input", inputs.shape)
hidden = inputs
for i in range(len(weights)):
# print(i, 'w', weights[i].shape)
# print(i, 'b', biases[i].shape)
print(f"hidden {i} : ", hidden)
print("W1 * input:", np.dot(hidden, weights[i]))
print(weights[i].shape)
print(biases[i].shape)
hidden = np.dot(hidden, weights[i]) + biases[i]
hidden = np.maximum(hidden, 0) # ReLU activation function
return hidden
# Example usage
inputs = np.array([[-0.6742486351912229, -1.0585763576446536]]) # Example input data
output = forward_propagation(inputs)
print("These inputs data are the first row of test_df")
print(output)
import numpy as np
hid_activ_fun = lambda x: np.maximum(0, x) # ReLU, most used within hidden neurons
out_activ_fun = lambda x: np.maximum(
0, x
) # Identity, most used within output neurons in regressions
def forward_propagation(inputs, W1, B1, W2, B2):
hidden1 = hid_activ_fun(np.dot(W1, inputs) + B1) # shape = (nhid1, ncases)
hidden2 = out_activ_fun(np.dot(W2, hidden1) + B2)
return hidden2 # shape = (nout, ncases) = (1, ncases)
if __name__ == "__main__":
    inputs = np.array(
        [[-0.6742486351912229, -1.0585763576446536]]
    ).T  # transpose so shape = (ndim, ncases) = (2, 1)
    ndim, ncases = inputs.shape
def model_get_weights():
W1, B1 = model.layers[0].get_weights()
print(W1.shape)
print(B1.shape)
W2, B2 = model.layers[1].get_weights()
print(W2.shape)
print(B2.shape)
return W1, B1, W2, B2
W1, B1, W2, B2 = model_get_weights()
print("For 3 cases (datapoint with 2 dimensions), all outputs are:")
outputs = forward_propagation(inputs, W1, B1, W2, B2).reshape(
ncases,
)
print("Input => Output")
for case in range(inputs.shape[1]):
print(inputs[:, case], " => ", outputs[case])
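    # Optional sanity check (illustrative): the manual forward pass above should match
    # the trained Keras model's own prediction for the same point.
    print("model.predict on the same point:", model.predict(inputs.T, verbose=0))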
print(weights)
print(biases)
# **Finding the best number of neurons for the model (see the Optuna objective further below)**
"""
W1, B1, W2, B2, W3, B3 = model.get_weights()
def forward_propagation(inputs):
hidden1 = np.dot(inputs, W1) + B1
hidden1 = np.maximum(hidden1, 0) # ReLU activation function
hidden2 = np.dot(hidden1, W2) + B2
hidden2 = np.maximum(hidden2, 0) # ReLU activation function
output = np.dot(hidden2, W3) + B3
return output[0]
"""
import numpy as np
# This way you can play with different activation functions:
hid_activ_fun = lambda x: np.maximum(0, x) # ReLU, most used within hidden neurons
out_activ_fun = lambda x: np.maximum(
0, x
) # Identity, most used within output neurons in regressions
# For example, you can change ReLU to:
# - Leaky ReLU => lambda x : np.maximum(alpha*x, x), with alpha = 0.01, 0.02, ...
# - ReLU6 => lambda x : np.minimum(np.maximum(0, x), 6)
# https://towardsdatascience.com/how-to-choose-the-right-activation-function-for-neural-networks-3941ff0e6f9c
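# Illustrative sketches of those alternatives (possible drop-in replacements for hid_activ_fun):
leaky_relu = lambda x, alpha=0.01: np.maximum(alpha * x, x)  # Leaky ReLU
relu6 = lambda x: np.minimum(np.maximum(0, x), 6)  # ReLU6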
def forward_propagation(inputs, W1, B1, W2, B2):
# """Droping the transposing operator due to previously ensuring shape of weights"""
# ensure inputs.shape = (ndim, ncases)
# ensure W1.shape = (nhid1, ndim)
# ensure B1.shape = (nhid1, 1)
# hidden1 = hid_activ_fun(W1 @ inputs + B1) # shape = (nhid1, ncases)
hidden1 = hid_activ_fun(np.dot(W1, inputs) + B1)
print("W1 * input: ", W1 @ inputs)
print("W1 * input+ B1 : ", W1 @ inputs + B1)
print("hidden1: ", hidden1)
# ensure W2.shape = (nhid2, nhid1)
# ensure B2.shape = (nhid2, 1)
# hidden2 = hid_activ_fun(W2 @ hidden1 + B2) # shape = (nhid2, ncases)
# ensure W3.shape = (nout, nhid2) = (1, nhid2), as in a single neuron output
# ensure B3.shape = (nout, 1) = (1, 1), as in a single neuron output
# hidden2 = out_activ_fun(W2 @ hidden1 + B2)
hidden2 = out_activ_fun(np.dot(W2, hidden1) + B2)
print("hidden2: ", hidden2)
return hidden2 # shape = (nout, ncases) = (1, ncases)
if __name__ == "__main__":
# Example: inputs.ndim = 2, inputs.ncases = 3
inputs = np.array(
[[-0.6742486351912229, -1.0585763576446536]]
).T # shape = (ndim, ncases)
ndim, ncases = inputs.shape
    # Neural network weights come from the trained Keras model above (2 inputs, 2 hidden units, 1 output)
def model_get_weights():
# """Just a fake one, in order to produce a proof of concept;
# Weights and biases in [-1,1] at random.
# This is NOT a trained network!!"""
W1, B1 = model.layers[0].get_weights()
W1 = W1.reshape(W1.shape[1], W1.shape[0])
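        # note: reshape only changes the shape, not the element order, so for the square
        # first-layer matrix this is not the transpose W1.T that the column-vector math
        # in forward_propagation assumes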
B1 = B1.reshape(-1, 1)
print("W1: ", W1)
print("B1: ", B1)
W2, B2 = model.layers[1].get_weights()
W2 = W2.reshape(W2.shape[1], W2.shape[0])
B2 = B2.reshape(-1, 1)
print(W2)
print(B2)
# W3, B3 = model.layers[2].get_weights()
# W3 = W3.reshape(W3.shape[1],W3.shape[0])
# B3 = B3.reshape(-1,1)
# return W1, B1, W2, B2, W3, B3
return W1, B1, W2, B2
W1, B1, W2, B2 = model_get_weights()
print("For 3 cases (datapoint with 2 dimensions), all outputs are:")
outputs = forward_propagation(inputs, W1, B1, W2, B2).reshape(
ncases,
)
print("Input => Output")
for case in range(inputs.shape[1]):
print(inputs[:, case], " => ", outputs[case])
def objective(trial):
clear_session()
df = pd.read_csv("/kaggle/input/bacteria/my.csv")
random_values = np.random.rand(len(df[df.Beta_corrige == 0.001]))
df["Beta_corrige"][df.Beta_corrige == 0.001] = random_values
X = df[["ENT", "NOCA"]].values
y = df["Beta_corrige"].values
x_train, x_valid, y_train, y_valid = train_test_split(
X, y, test_size=0.2, random_state=42
)
# scaler = StandardScaler()
# x_train = scaler.fit_transform(x_train)
model = keras.Sequential(
[
layers.Dense(
trial.suggest_int("first_layer", 1, 2),
activation="relu",
input_shape=[2],
),
layers.Dense(trial.suggest_int("hidden_layer", 1, 4), activation="relu"),
layers.Dense(1, activation="relu"),
]
)
# We compile our model with a sampled learning rate.
learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-1, log=True)
model.compile(
optimizer=tf.optimizers.Adam(learning_rate=learning_rate),
loss="mse",
metrics=["mae", "mse", "accuracy"],
)
model.fit(
x_train,
y_train,
validation_data=(x_valid, y_valid),
shuffle=True,
epochs=2000,
verbose=False,
)
# Evaluate the model accuracy on the validation set.
test_loss, test_mae, test_mse, test_acc = model.evaluate(
x_valid, y_valid, verbose=0
)
print("MAE score:", test_mae)
print("MSE score:", test_mse)
return test_loss
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=10)
print("Number of finished trials: {}".format(len(study.trials)))
print("Best trial:")
trial = study.best_trial
print(" Value: {}".format(trial.value))
print(" Params: ")
for key, value in trial.params.items():
print(" {}: {}".format(key, value))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/691/129691527.ipynb
| null | null |
[{"Id": 129691527, "ScriptId": 38305845, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 7985664, "CreationDate": "05/15/2023 19:31:53", "VersionNumber": 1.0, "Title": "New_model", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 323.0, "LinesInsertedFromPrevious": 323.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import learning_curve
import optuna
from keras.backend import clear_session
import matplotlib.pyplot as plt
df = pd.read_csv("/kaggle/input/bacteria/my.csv")
df
# random_values = np.random.rand(len(df[df.Beta_corrige == 0.001]))
# df['Beta_corrige'][df.Beta_corrige == 0.001] = random_values
df = df[df.Beta_corrige != 0.001]
X = df[["ENT", "NOCA"]].values
y = df["Beta_corrige"].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
model = keras.Sequential(
[
layers.Dense(2, activation="relu", input_shape=[2]),
layers.Dense(1, activation="relu"),
]
)
model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.001),
loss="mse",
metrics=["mae", "mse", "accuracy"],
)
callback = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10)
history = model.fit(
X_train,
y_train,
validation_data=(X_test, y_test),
epochs=1000,
validation_split=0.1,
callbacks=[callback],
verbose=0,
)
test_loss, test_mae, test_mse, test_acc = model.evaluate(X_test, y_test, verbose=0)
print("MAE score:", test_mae)
print("MSE score:", test_mse)
# Plot MAE and MSE scores
plt.figure(figsize=(10, 6))
# plt.plot(history.history['mae'], label='MAE')
# plt.plot(history.history['mse'], label='MSE')
plt.plot(history.history["val_loss"], label="VAL_LOSS")
plt.plot(history.history["loss"], label="LOSS")
plt.xlabel("Epoch")
plt.ylabel("Score")
plt.title("MAE and MSE Scores")
plt.legend()
plt.show()
y_pred = model.predict(X_test)
test_df = pd.DataFrame(
{"y_test": y_test.tolist(), "y_pred": y_pred.tolist(), "X_test": X_test.tolist()}
)
test_df.head(10)
# Retrieve weights and biases for each layer
weights = []
biases = []
for layer in model.layers:
layer_weights, layer_biases = layer.get_weights()
weights.append(layer_weights)
biases.append(layer_biases)
def forward_propagation(inputs):
print("input", inputs.shape)
hidden = inputs
for i in range(len(weights)):
# print(i, 'w', weights[i].shape)
# print(i, 'b', biases[i].shape)
print(f"hidden {i} : ", hidden)
print("W1 * input:", np.dot(hidden, weights[i]))
print(weights[i].shape)
print(biases[i].shape)
hidden = np.dot(hidden, weights[i]) + biases[i]
hidden = np.maximum(hidden, 0) # ReLU activation function
return hidden
# Example usage
inputs = np.array([[-0.6742486351912229, -1.0585763576446536]]) # Example input data
output = forward_propagation(inputs)
print("These inputs data are the first row of test_df")
print(output)
import numpy as np
hid_activ_fun = lambda x: np.maximum(0, x) # ReLU, most used within hidden neurons
out_activ_fun = lambda x: np.maximum(
0, x
) # Identity, most used within output neurons in regressions
def forward_propagation(inputs, W1, B1, W2, B2):
hidden1 = hid_activ_fun(np.dot(W1, inputs) + B1) # shape = (nhid1, ncases)
hidden2 = out_activ_fun(np.dot(W2, hidden1) + B2)
return hidden2 # shape = (nout, ncases) = (1, ncases)
if __name__ == "__main__":
    inputs = np.array(
        [[-0.6742486351912229, -1.0585763576446536]]
    ).T  # transpose so shape = (ndim, ncases) = (2, 1)
    ndim, ncases = inputs.shape
def model_get_weights():
W1, B1 = model.layers[0].get_weights()
print(W1.shape)
print(B1.shape)
W2, B2 = model.layers[1].get_weights()
print(W2.shape)
print(B2.shape)
return W1, B1, W2, B2
W1, B1, W2, B2 = model_get_weights()
print("For 3 cases (datapoint with 2 dimensions), all outputs are:")
outputs = forward_propagation(inputs, W1, B1, W2, B2).reshape(
ncases,
)
print("Input => Output")
for case in range(inputs.shape[1]):
print(inputs[:, case], " => ", outputs[case])
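    # Optional sanity check (illustrative): the manual forward pass above should match
    # the trained Keras model's own prediction for the same point.
    print("model.predict on the same point:", model.predict(inputs.T, verbose=0))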
print(weights)
print(biases)
# **Finding the best number of neurons for the model (see the Optuna objective further below)**
"""
W1, B1, W2, B2, W3, B3 = model.get_weights()
def forward_propagation(inputs):
hidden1 = np.dot(inputs, W1) + B1
hidden1 = np.maximum(hidden1, 0) # ReLU activation function
hidden2 = np.dot(hidden1, W2) + B2
hidden2 = np.maximum(hidden2, 0) # ReLU activation function
output = np.dot(hidden2, W3) + B3
return output[0]
"""
import numpy as np
# This way you can play with different activation functions:
hid_activ_fun = lambda x: np.maximum(0, x) # ReLU, most used within hidden neurons
out_activ_fun = lambda x: np.maximum(
0, x
) # Identity, most used within output neurons in regressions
# For example, you can change ReLU to:
# - Leaky ReLU => lambda x : np.maximum(alpha*x, x), with alpha = 0.01, 0.02, ...
# - ReLU6 => lambda x : np.minimum(np.maximum(0, x), 6)
# https://towardsdatascience.com/how-to-choose-the-right-activation-function-for-neural-networks-3941ff0e6f9c
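# Illustrative sketches of those alternatives (possible drop-in replacements for hid_activ_fun):
leaky_relu = lambda x, alpha=0.01: np.maximum(alpha * x, x)  # Leaky ReLU
relu6 = lambda x: np.minimum(np.maximum(0, x), 6)  # ReLU6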
def forward_propagation(inputs, W1, B1, W2, B2):
# """Droping the transposing operator due to previously ensuring shape of weights"""
# ensure inputs.shape = (ndim, ncases)
# ensure W1.shape = (nhid1, ndim)
# ensure B1.shape = (nhid1, 1)
# hidden1 = hid_activ_fun(W1 @ inputs + B1) # shape = (nhid1, ncases)
hidden1 = hid_activ_fun(np.dot(W1, inputs) + B1)
print("W1 * input: ", W1 @ inputs)
print("W1 * input+ B1 : ", W1 @ inputs + B1)
print("hidden1: ", hidden1)
# ensure W2.shape = (nhid2, nhid1)
# ensure B2.shape = (nhid2, 1)
# hidden2 = hid_activ_fun(W2 @ hidden1 + B2) # shape = (nhid2, ncases)
# ensure W3.shape = (nout, nhid2) = (1, nhid2), as in a single neuron output
# ensure B3.shape = (nout, 1) = (1, 1), as in a single neuron output
# hidden2 = out_activ_fun(W2 @ hidden1 + B2)
hidden2 = out_activ_fun(np.dot(W2, hidden1) + B2)
print("hidden2: ", hidden2)
return hidden2 # shape = (nout, ncases) = (1, ncases)
if __name__ == "__main__":
# Example: inputs.ndim = 2, inputs.ncases = 3
inputs = np.array(
[[-0.6742486351912229, -1.0585763576446536]]
).T # shape = (ndim, ncases)
ndim, ncases = inputs.shape
    # Neural network weights come from the trained Keras model above (2 inputs, 2 hidden units, 1 output)
def model_get_weights():
# """Just a fake one, in order to produce a proof of concept;
# Weights and biases in [-1,1] at random.
# This is NOT a trained network!!"""
W1, B1 = model.layers[0].get_weights()
W1 = W1.reshape(W1.shape[1], W1.shape[0])
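        # note: reshape only changes the shape, not the element order, so for the square
        # first-layer matrix this is not the transpose W1.T that the column-vector math
        # in forward_propagation assumes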
B1 = B1.reshape(-1, 1)
print("W1: ", W1)
print("B1: ", B1)
W2, B2 = model.layers[1].get_weights()
W2 = W2.reshape(W2.shape[1], W2.shape[0])
B2 = B2.reshape(-1, 1)
print(W2)
print(B2)
# W3, B3 = model.layers[2].get_weights()
# W3 = W3.reshape(W3.shape[1],W3.shape[0])
# B3 = B3.reshape(-1,1)
# return W1, B1, W2, B2, W3, B3
return W1, B1, W2, B2
W1, B1, W2, B2 = model_get_weights()
print("For 3 cases (datapoint with 2 dimensions), all outputs are:")
outputs = forward_propagation(inputs, W1, B1, W2, B2).reshape(
ncases,
)
print("Input => Output")
for case in range(inputs.shape[1]):
print(inputs[:, case], " => ", outputs[case])
def objective(trial):
clear_session()
df = pd.read_csv("/kaggle/input/bacteria/my.csv")
random_values = np.random.rand(len(df[df.Beta_corrige == 0.001]))
df["Beta_corrige"][df.Beta_corrige == 0.001] = random_values
X = df[["ENT", "NOCA"]].values
y = df["Beta_corrige"].values
x_train, x_valid, y_train, y_valid = train_test_split(
X, y, test_size=0.2, random_state=42
)
# scaler = StandardScaler()
# x_train = scaler.fit_transform(x_train)
model = keras.Sequential(
[
layers.Dense(
trial.suggest_int("first_layer", 1, 2),
activation="relu",
input_shape=[2],
),
layers.Dense(trial.suggest_int("hidden_layer", 1, 4), activation="relu"),
layers.Dense(1, activation="relu"),
]
)
# We compile our model with a sampled learning rate.
learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-1, log=True)
model.compile(
optimizer=tf.optimizers.Adam(learning_rate=learning_rate),
loss="mse",
metrics=["mae", "mse", "accuracy"],
)
model.fit(
x_train,
y_train,
validation_data=(x_valid, y_valid),
shuffle=True,
epochs=2000,
verbose=False,
)
# Evaluate the model accuracy on the validation set.
test_loss, test_mae, test_mse, test_acc = model.evaluate(
x_valid, y_valid, verbose=0
)
print("MAE score:", test_mae)
print("MSE score:", test_mse)
return test_loss
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=10)
print("Number of finished trials: {}".format(len(study.trials)))
print("Best trial:")
trial = study.best_trial
print(" Value: {}".format(trial.value))
print(" Params: ")
for key, value in trial.params.items():
print(" {}: {}".format(key, value))
| false | 0 | 3,291 | 0 | 3,291 | 3,291 |
||
129705666
|
# =========================
# Import libraries
# =========================
# default
import gc, os, glob, random
from os import path
from pathlib import Path
# make data
import polars as pl
import pandas as pd
pd.set_option("display.max_columns", None)
# pd.set_option('display.max_rows', None)
import numpy as np
from tqdm.auto import tqdm
import ydata_profiling as pdp
def show_df(df, num=3, tail=True):
print(df.shape)
display(df.head(num))
if tail:
display(df.tail(num))
defog_path = glob.glob(
"/kaggle/input/tlvmc-parkinsons-freezing-gait-prediction/train/defog/*.csv"
)
tdcsfog_path = glob.glob(
"/kaggle/input/tlvmc-parkinsons-freezing-gait-prediction/train/tdcsfog/*.csv"
)
notype_path = glob.glob(
"/kaggle/input/tlvmc-parkinsons-freezing-gait-prediction/train/notype/*.csv"
)
# # Tasks
# ==============================================================================
# Tasks - Task metadata for series in the defog dataset.(not tdcsfog & daily)-
# ==============================================================================
tasks = pd.read_csv("/kaggle/input/tlvmc-parkinsons-freezing-gait-prediction/tasks.csv")
tasks["Duration"] = tasks["End"] - tasks["Begin"]
print("-" * 80)
print("Tasks - Task metadata for series in the defog dataset.(not tdcsfog & daily)-")
print("-" * 80)
show_df(tasks)
tasks_pivot = pd.pivot_table(
tasks,
values=["Duration"],
index=["Id"],
columns=["Task"],
aggfunc="sum",
fill_value=0,
)
tasks_pivot
train_defog_list = [os.path.basename(path).split(".cs")[0] for path in defog_path]
task_list = list(tasks.Id.unique())
print(f"lentgh of train_defog_list: {len(train_defog_list)}")
print(f"lentgh of Task : {len(task_list)}")
test_defog_list = [path for path in task_list if path not in train_defog_list]
print(*test_defog_list)
test_defog_table = tasks_pivot[tasks_pivot.index.isin(test_defog_list)]
def color_background_lightgreen(val):
color = "lightgreen" if val > 1 else "" # 1より大なら薄緑、その他は白
return "background-color: %s" % color
# display the styled table
test_defog_table.style.applymap(color_background_lightgreen)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/705/129705666.ipynb
| null | null |
[{"Id": 129705666, "ScriptId": 38572005, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5196442, "CreationDate": "05/15/2023 22:46:59", "VersionNumber": 1.0, "Title": "Task -metadata-", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 64.0, "LinesInsertedFromPrevious": 64.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# =========================
# Import libraries
# =========================
# default
import gc, os, glob, random
from os import path
from pathlib import Path
# make data
import polars as pl
import pandas as pd
pd.set_option("display.max_columns", None)
# pd.set_option('display.max_rows', None)
import numpy as np
from tqdm.auto import tqdm
import ydata_profiling as pdp
def show_df(df, num=3, tail=True):
print(df.shape)
display(df.head(num))
if tail:
display(df.tail(num))
defog_path = glob.glob(
"/kaggle/input/tlvmc-parkinsons-freezing-gait-prediction/train/defog/*.csv"
)
tdcsfog_path = glob.glob(
"/kaggle/input/tlvmc-parkinsons-freezing-gait-prediction/train/tdcsfog/*.csv"
)
notype_path = glob.glob(
"/kaggle/input/tlvmc-parkinsons-freezing-gait-prediction/train/notype/*.csv"
)
# # Tasks
# ==============================================================================
# Tasks - Task metadata for series in the defog dataset.(not tdcsfog & daily)-
# ==============================================================================
tasks = pd.read_csv("/kaggle/input/tlvmc-parkinsons-freezing-gait-prediction/tasks.csv")
tasks["Duration"] = tasks["End"] - tasks["Begin"]
print("-" * 80)
print("Tasks - Task metadata for series in the defog dataset.(not tdcsfog & daily)-")
print("-" * 80)
show_df(tasks)
tasks_pivot = pd.pivot_table(
tasks,
values=["Duration"],
index=["Id"],
columns=["Task"],
aggfunc="sum",
fill_value=0,
)
tasks_pivot
train_defog_list = [os.path.basename(path).split(".cs")[0] for path in defog_path]
task_list = list(tasks.Id.unique())
print(f"lentgh of train_defog_list: {len(train_defog_list)}")
print(f"lentgh of Task : {len(task_list)}")
test_defog_list = [path for path in task_list if path not in train_defog_list]
print(*test_defog_list)
test_defog_table = tasks_pivot[tasks_pivot.index.isin(test_defog_list)]
def color_background_lightgreen(val):
color = "lightgreen" if val > 1 else "" # 1より大なら薄緑、その他は白
return "background-color: %s" % color
# display the styled table
test_defog_table.style.applymap(color_background_lightgreen)
| false | 0 | 696 | 0 | 696 | 696 |
||
129705437
|
<jupyter_start><jupyter_text>Chest X-Ray Images (Pneumonia)
### Context
http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5

Figure S6. Illustrative Examples of Chest X-Rays in Patients with Pneumonia, Related to Figure 6
The normal chest X-ray (left panel) depicts clear lungs without any areas of abnormal opacification in the image. Bacterial pneumonia (middle) typically exhibits a focal lobar consolidation, in this case in the right upper lobe (white arrows), whereas viral pneumonia (right) manifests with a more diffuse ‘‘interstitial’’ pattern in both lungs.
http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5
### Content
The dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal). There are 5,863 X-Ray images (JPEG) and 2 categories (Pneumonia/Normal).
Chest X-ray images (anterior-posterior) were selected from retrospective cohorts of pediatric patients of one to five years old from Guangzhou Women and Children’s Medical Center, Guangzhou. All chest X-ray imaging was performed as part of patients’ routine clinical care.
For the analysis of chest x-ray images, all chest radiographs were initially screened for quality control by removing all low quality or unreadable scans. The diagnoses for the images were then graded by two expert physicians before being cleared for training the AI system. In order to account for any grading errors, the evaluation set was also checked by a third expert.
Kaggle dataset identifier: chest-xray-pneumonia
<jupyter_script># If you like my work don't forget to upvote the kernel.
import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
import random
from sklearn.metrics import confusion_matrix, classification_report
import seaborn as sns
import numpy as np
IMG_SIZE = 224
TRAINING_DIR = "/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/train"
training_datagen = ImageDataGenerator(
rescale=1.0 / 255, shear_range=0.2, zoom_range=0.2
)
train_generator = training_datagen.flow_from_directory(
TRAINING_DIR,
target_size=(IMG_SIZE, IMG_SIZE),
class_mode="categorical",
batch_size=200,
shuffle=True,
)
TEST_DIR = "/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/test"
test_datagen = ImageDataGenerator(rescale=1.0 / 255)
test_generator = test_datagen.flow_from_directory(
TEST_DIR,
target_size=(IMG_SIZE, IMG_SIZE),
class_mode=None,
batch_size=200,
shuffle=False,
)
VAL_DIR = "/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/val"
val_datagen = ImageDataGenerator(rescale=1.0 / 255)
# val_generator = val_datagen.flow_from_directory(TEST_DIR,target_size=(IMG_SIZE,IMG_SIZE),class_mode='categorical',
# batch_size=200,shuffle= False)
val_generator = val_datagen.flow_from_directory(
VAL_DIR,
target_size=(IMG_SIZE, IMG_SIZE),
class_mode="categorical",
batch_size=200,
shuffle=False,
)
x, y = train_generator.next()
for i in range(0, 1):
image = x[i]
plt.imshow(image)
plt.show()
import tensorflow_hub as hub
URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor = hub.KerasLayer(URL, input_shape=(224, 224, 3))
feature_extractor.trainable = False
model = tf.keras.models.Sequential(
[
feature_extractor,
tf.keras.layers.Dense(200, activation="relu"),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation="relu"),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(2, activation="softmax"),
]
)
model.summary()
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if logs["accuracy"] >= 0.95:
self.model.stop_training = True
callbacks = myCallback()
METRICS = [
"accuracy",
tf.keras.metrics.Precision(name="precision"),
tf.keras.metrics.Recall(name="recall"),
]
random.seed(40)
model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.0001),
loss="binary_crossentropy",
metrics=METRICS,
)
history = model.fit(
train_generator, epochs=50, callbacks=[callbacks], validation_data=val_generator
)
fig, ax = plt.subplots(1, 4, figsize=(20, 3))
ax = ax.ravel()
for i, met in enumerate(["precision", "recall", "accuracy", "loss"]):
ax[i].plot(history.history[met])
ax[i].plot(history.history["val_" + met])
ax[i].set_title("Model {}".format(met))
ax[i].set_xlabel("epochs")
ax[i].set_ylabel(met)
ax[i].legend(["train", "val"])
model.evaluate(val_generator)
model.save("Inception_V3.2.h5")
# Predict labels for the test set
predictions = model.predict(test_generator)
predicted_labels = np.argmax(predictions, axis=1)
# Get the true labels for the test set
true_labels = test_generator.classes
# Compute the confusion matrix
cm = confusion_matrix(true_labels, predicted_labels)
# Compute the classification report
report = classification_report(
true_labels, predicted_labels, target_names=test_generator.class_indices.keys()
)
# Compute the accuracy, precision, recall, f1 score, and specificity
accuracy = (true_labels == predicted_labels).mean()
precision = cm[1, 1] / (cm[1, 1] + cm[0, 1])
recall = cm[1, 1] / (cm[1, 1] + cm[1, 0])
f1_score = 2 * precision * recall / (precision + recall)
npv = cm[0, 0] / (cm[0, 0] + cm[1, 0])
specificity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
print("Accuracy: {:.2%}".format(accuracy))
print("Precision/PPV: {:.2%}".format(precision))
print("Recall/Sensitivity: {:.2%}".format(recall))
print("F1 Score: {:.2%}".format(f1_score))
print("NPV: {:.2%}".format(npv))
print("Specificity: {:.2%}".format(specificity))
# Plot the confusion matrix
plt.figure(figsize=(8, 8))
plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues)
plt.title("Confusion matrix")
plt.colorbar()
tick_marks = np.arange(len(test_generator.class_indices))
plt.xticks(tick_marks, test_generator.class_indices.keys(), rotation=90)
plt.yticks(tick_marks, test_generator.class_indices.keys())
plt.tight_layout()
plt.xlabel("Predicted")
plt.ylabel("True")
plt.show()
# Plot the training and validation accuracy
plt.figure(figsize=(8, 8))
plt.plot(history.history["accuracy"], label="Training Accuracy")
plt.plot(history.history["val_accuracy"], label="Validation Accuracy")
plt.title("Training and Validation Accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend(loc="lower right")
plt.show()
# Plot the training and validation loss
plt.figure(figsize=(8, 8))
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.title("Training and Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend(loc="upper right")
plt.show()
# Plot the heat map
test_data = test_datagen.flow_from_directory(
TEST_DIR,
target_size=(IMG_SIZE, IMG_SIZE),
class_mode="categorical",
batch_size=200,
shuffle=False,
)
image_batch, label_batch = next(test_data)
class_names = list(test_generator.class_indices.keys())
predictions = model.predict(image_batch)
predicted_labels = np.argmax(predictions, axis=1)
true_labels = np.argmax(label_batch, axis=1)
cm = confusion_matrix(true_labels, predicted_labels)
plt.figure(figsize=(10, 10))
sns.heatmap(
cm,
annot=True,
fmt="d",
cmap="Blues",
cbar=False,
xticklabels=class_names,
yticklabels=class_names,
)
plt.xlabel("Predicted")
plt.ylabel("True")
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/705/129705437.ipynb
|
chest-xray-pneumonia
|
paultimothymooney
|
[{"Id": 129705437, "ScriptId": 38565018, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10694652, "CreationDate": "05/15/2023 22:42:40", "VersionNumber": 1.0, "Title": "85% Accuracy with Transfer Learning(Inception v3)", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 170.0, "LinesInsertedFromPrevious": 98.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 72.0, "LinesInsertedFromFork": 98.0, "LinesDeletedFromFork": 8.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 72.0, "TotalVotes": 0}]
|
[{"Id": 186037390, "KernelVersionId": 129705437, "SourceDatasetVersionId": 23812}]
|
[{"Id": 23812, "DatasetId": 17810, "DatasourceVersionId": 23851, "CreatorUserId": 1314380, "LicenseName": "Other (specified in description)", "CreationDate": "03/24/2018 19:41:59", "VersionNumber": 2.0, "Title": "Chest X-Ray Images (Pneumonia)", "Slug": "chest-xray-pneumonia", "Subtitle": "5,863 images, 2 categories", "Description": "### Context\n\nhttp://www.cell.com/cell/fulltext/S0092-8674(18)30154-5\n\n\n\nFigure S6. Illustrative Examples of Chest X-Rays in Patients with Pneumonia, Related to Figure 6\nThe normal chest X-ray (left panel) depicts clear lungs without any areas of abnormal opacification in the image. Bacterial pneumonia (middle) typically exhibits a focal lobar consolidation, in this case in the right upper lobe (white arrows), whereas viral pneumonia (right) manifests with a more diffuse \u2018\u2018interstitial\u2019\u2019 pattern in both lungs.\nhttp://www.cell.com/cell/fulltext/S0092-8674(18)30154-5\n\n### Content\n\nThe dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal). There are 5,863 X-Ray images (JPEG) and 2 categories (Pneumonia/Normal). \n\nChest X-ray images (anterior-posterior) were selected from retrospective cohorts of pediatric patients of one to five years old from Guangzhou Women and Children\u2019s Medical Center, Guangzhou. All chest X-ray imaging was performed as part of patients\u2019 routine clinical care. \n\nFor the analysis of chest x-ray images, all chest radiographs were initially screened for quality control by removing all low quality or unreadable scans. The diagnoses for the images were then graded by two expert physicians before being cleared for training the AI system. In order to account for any grading errors, the evaluation set was also checked by a third expert.\n\n### Acknowledgements\n\nData: https://data.mendeley.com/datasets/rscbjbr9sj/2\n\nLicense: [CC BY 4.0][1]\n\nCitation: http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5\n\n![enter image description here][2]\n\n\n### Inspiration\n\nAutomated methods to detect and classify human diseases from medical images.\n\n\n [1]: https://creativecommons.org/licenses/by/4.0/\n [2]: https://i.imgur.com/8AUJkin.png", "VersionNotes": "train/test/val", "TotalCompressedBytes": 1237249419.0, "TotalUncompressedBytes": 1237249419.0}]
|
[{"Id": 17810, "CreatorUserId": 1314380, "OwnerUserId": 1314380.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 23812.0, "CurrentDatasourceVersionId": 23851.0, "ForumId": 25540, "Type": 2, "CreationDate": "03/22/2018 05:42:41", "LastActivityDate": "03/22/2018", "TotalViews": 2063138, "TotalDownloads": 237932, "TotalVotes": 5834, "TotalKernels": 2058}]
|
[{"Id": 1314380, "UserName": "paultimothymooney", "DisplayName": "Paul Mooney", "RegisterDate": "10/05/2017", "PerformanceTier": 5}]
|
# If you like my work don't forget to upvote the kernel.
import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
import random
from sklearn.metrics import confusion_matrix, classification_report
import seaborn as sns
import numpy as np
IMG_SIZE = 224
TRAINING_DIR = "/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/train"
training_datagen = ImageDataGenerator(
rescale=1.0 / 255, shear_range=0.2, zoom_range=0.2
)
train_generator = training_datagen.flow_from_directory(
TRAINING_DIR,
target_size=(IMG_SIZE, IMG_SIZE),
class_mode="categorical",
batch_size=200,
shuffle=True,
)
TEST_DIR = "/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/test"
test_datagen = ImageDataGenerator(rescale=1.0 / 255)
test_generator = test_datagen.flow_from_directory(
TEST_DIR,
target_size=(IMG_SIZE, IMG_SIZE),
class_mode=None,
batch_size=200,
shuffle=False,
)
VAL_DIR = "/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/val"
val_datagen = ImageDataGenerator(rescale=1.0 / 255)
# val_generator = val_datagen.flow_from_directory(TEST_DIR,target_size=(IMG_SIZE,IMG_SIZE),class_mode='categorical',
# batch_size=200,shuffle= False)
val_generator = val_datagen.flow_from_directory(
VAL_DIR,
target_size=(IMG_SIZE, IMG_SIZE),
class_mode="categorical",
batch_size=200,
shuffle=False,
)
x, y = train_generator.next()
for i in range(0, 1):
image = x[i]
plt.imshow(image)
plt.show()
import tensorflow_hub as hub
URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor = hub.KerasLayer(URL, input_shape=(224, 224, 3))
feature_extractor.trainable = False
model = tf.keras.models.Sequential(
[
feature_extractor,
tf.keras.layers.Dense(200, activation="relu"),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation="relu"),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(2, activation="softmax"),
]
)
model.summary()
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if logs["accuracy"] >= 0.95:
self.model.stop_training = True
callbacks = myCallback()
METRICS = [
"accuracy",
tf.keras.metrics.Precision(name="precision"),
tf.keras.metrics.Recall(name="recall"),
]
random.seed(40)
model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.0001),
loss="binary_crossentropy",
metrics=METRICS,
)
history = model.fit(
train_generator, epochs=50, callbacks=[callbacks], validation_data=val_generator
)
fig, ax = plt.subplots(1, 4, figsize=(20, 3))
ax = ax.ravel()
for i, met in enumerate(["precision", "recall", "accuracy", "loss"]):
ax[i].plot(history.history[met])
ax[i].plot(history.history["val_" + met])
ax[i].set_title("Model {}".format(met))
ax[i].set_xlabel("epochs")
ax[i].set_ylabel(met)
ax[i].legend(["train", "val"])
model.evaluate(val_generator)
model.save("Inception_V3.2.h5")
# Predict labels for the test set
predictions = model.predict(test_generator)
predicted_labels = np.argmax(predictions, axis=1)
# Get the true labels for the test set
true_labels = test_generator.classes
# Compute the confusion matrix
cm = confusion_matrix(true_labels, predicted_labels)
# Compute the classification report
report = classification_report(
true_labels, predicted_labels, target_names=test_generator.class_indices.keys()
)
# Compute the accuracy, precision, recall, f1 score, and specificity
accuracy = (true_labels == predicted_labels).mean()
precision = cm[1, 1] / (cm[1, 1] + cm[0, 1])
recall = cm[1, 1] / (cm[1, 1] + cm[1, 0])
f1_score = 2 * precision * recall / (precision + recall)
npv = cm[0, 0] / (cm[0, 0] + cm[1, 0])
specificity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
print("Accuracy: {:.2%}".format(accuracy))
print("Precision/PPV: {:.2%}".format(precision))
print("Recall/Sensitivity: {:.2%}".format(recall))
print("F1 Score: {:.2%}".format(f1_score))
print("NPV: {:.2%}".format(npv))
print("Specificity: {:.2%}".format(specificity))
# Plot the confusion matrix
plt.figure(figsize=(8, 8))
plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues)
plt.title("Confusion matrix")
plt.colorbar()
tick_marks = np.arange(len(test_generator.class_indices))
plt.xticks(tick_marks, test_generator.class_indices.keys(), rotation=90)
plt.yticks(tick_marks, test_generator.class_indices.keys())
plt.tight_layout()
plt.xlabel("Predicted")
plt.ylabel("True")
plt.show()
# Plot the training and validation accuracy
plt.figure(figsize=(8, 8))
plt.plot(history.history["accuracy"], label="Training Accuracy")
plt.plot(history.history["val_accuracy"], label="Validation Accuracy")
plt.title("Training and Validation Accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend(loc="lower right")
plt.show()
# Plot the training and validation loss
plt.figure(figsize=(8, 8))
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.title("Training and Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend(loc="upper right")
plt.show()
# Plot the heat map
test_data = test_datagen.flow_from_directory(
TEST_DIR,
target_size=(IMG_SIZE, IMG_SIZE),
class_mode="categorical",
batch_size=200,
shuffle=False,
)
image_batch, label_batch = next(test_data)
class_names = list(test_generator.class_indices.keys())
predictions = model.predict(image_batch)
predicted_labels = np.argmax(predictions, axis=1)
true_labels = np.argmax(label_batch, axis=1)
cm = confusion_matrix(true_labels, predicted_labels)
plt.figure(figsize=(10, 10))
sns.heatmap(
cm,
annot=True,
fmt="d",
cmap="Blues",
cbar=False,
xticklabels=class_names,
yticklabels=class_names,
)
plt.xlabel("Predicted")
plt.ylabel("True")
plt.show()
| false | 0 | 1,998 | 0 | 2,474 | 1,998 |
||
129705508
|
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.impute import SimpleImputer
import warnings
def load_data():
train = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/train.csv")
test = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/test.csv")
greeks = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/greeks.csv")
return train, test, greeks
def preprocess_data(train):
num_cols = train.drop(["Id", "Class", "EJ"], axis=1).columns
cat_cols = ["EJ"]
numerical_transformer = Pipeline(
steps=[
("imputer", SimpleImputer(strategy="median")),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
steps=[
("imputer", SimpleImputer(strategy="most_frequent")),
("onehot", OneHotEncoder(handle_unknown="ignore")),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numerical_transformer, num_cols),
("cat", categorical_transformer, cat_cols),
]
)
return preprocessor
def train_model(X_train, y_train, preprocessor):
best_params = {
"bootstrap": False,
"max_depth": None,
"min_samples_leaf": 1,
"min_samples_split": 2,
"n_estimators": 100,
}
rf = Pipeline(
steps=[
("preprocessor", preprocessor),
("classifier", RandomForestClassifier(**best_params, random_state=1)),
]
)
rf.fit(X_train, y_train)
return rf
def evaluate_model(rf, X_train, y_train, X_valid, y_valid):
y_train_pred = rf.predict(X_train)
y_valid_pred = rf.predict(X_valid)
print("Training accuracy: ", accuracy_score(y_train, y_train_pred))
print("Validation accuracy: ", accuracy_score(y_valid, y_valid_pred))
def make_predictions(rf, test):
X_test = test.drop("Id", axis=1)
predictions = rf.predict_proba(X_test)
assert len(predictions) == len(
test
), "Number of predictions must match number of rows in test set"
return predictions
def create_submission(predictions, test):
submission = pd.DataFrame(predictions, columns=["class_0", "class_1"])
submission.insert(0, "Id", test["Id"])
return submission
def main():
warnings.filterwarnings("ignore", category=UserWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
# Load data
train, test, greeks = load_data()
# Define features and target
X = train.drop("Class", axis=1)
y = train["Class"]
# Preprocess data
preprocessor = preprocess_data(train)
# Split data into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(
X, y, test_size=0.2, random_state=1
)
# Train model
rf = train_model(X_train, y_train, preprocessor)
# Evaluate model
evaluate_model(rf, X_train, y_train, X_valid, y_valid)
# Make predictions
predictions = make_predictions(rf, test)
# Create submission
submission = create_submission(predictions, test)
# Save submission to csv
submission.to_csv("submission.csv", index=False)
if __name__ == "__main__":
main()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/705/129705508.ipynb
| null | null |
[{"Id": 129705508, "ScriptId": 38481998, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11408729, "CreationDate": "05/15/2023 22:44:04", "VersionNumber": 5.0, "Title": "Age Related Conditions", "EvaluationDate": "05/15/2023", "IsChange": false, "TotalLines": 96.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 96.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
| null | null | null | null |
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.impute import SimpleImputer
import warnings
def load_data():
train = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/train.csv")
test = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/test.csv")
greeks = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/greeks.csv")
return train, test, greeks
def preprocess_data(train):
num_cols = train.drop(["Id", "Class", "EJ"], axis=1).columns
cat_cols = ["EJ"]
numerical_transformer = Pipeline(
steps=[
("imputer", SimpleImputer(strategy="median")),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
steps=[
("imputer", SimpleImputer(strategy="most_frequent")),
("onehot", OneHotEncoder(handle_unknown="ignore")),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numerical_transformer, num_cols),
("cat", categorical_transformer, cat_cols),
]
)
return preprocessor
def train_model(X_train, y_train, preprocessor):
best_params = {
"bootstrap": False,
"max_depth": None,
"min_samples_leaf": 1,
"min_samples_split": 2,
"n_estimators": 100,
}
rf = Pipeline(
steps=[
("preprocessor", preprocessor),
("classifier", RandomForestClassifier(**best_params, random_state=1)),
]
)
rf.fit(X_train, y_train)
return rf
def evaluate_model(rf, X_train, y_train, X_valid, y_valid):
y_train_pred = rf.predict(X_train)
y_valid_pred = rf.predict(X_valid)
print("Training accuracy: ", accuracy_score(y_train, y_train_pred))
print("Validation accuracy: ", accuracy_score(y_valid, y_valid_pred))
def make_predictions(rf, test):
X_test = test.drop("Id", axis=1)
predictions = rf.predict_proba(X_test)
assert len(predictions) == len(
test
), "Number of predictions must match number of rows in test set"
return predictions
def create_submission(predictions, test):
submission = pd.DataFrame(predictions, columns=["class_0", "class_1"])
submission.insert(0, "Id", test["Id"])
return submission
def main():
warnings.filterwarnings("ignore", category=UserWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
# Load data
train, test, greeks = load_data()
# Define features and target
X = train.drop("Class", axis=1)
y = train["Class"]
# Preprocess data
preprocessor = preprocess_data(train)
# Split data into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(
X, y, test_size=0.2, random_state=1
)
# Train model
rf = train_model(X_train, y_train, preprocessor)
# Evaluate model
evaluate_model(rf, X_train, y_train, X_valid, y_valid)
# Make predictions
predictions = make_predictions(rf, test)
# Create submission
submission = create_submission(predictions, test)
# Save submission to csv
submission.to_csv("submission.csv", index=False)
if __name__ == "__main__":
main()
| false | 0 | 941 | 3 | 941 | 941 |
||
129705293
|
<jupyter_start><jupyter_text>Property Listings in Kuala Lumpur
# Property Listings in Kuala Lumpur
This is the tabular result of scraping a property listing website for properties for sale in Kuala Lumpur, Malaysia. Only the overview page was scraped so individual property details are scarce.
Kaggle dataset identifier: property-listings-in-kuala-lumpur
<jupyter_script># Performing House Price Prediction for Malaysia
# This will only be solved using simple regression techniques
# importing the dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from ydata_profiling import ProfileReport
# load the data
df = pd.read_csv("/kaggle/input/property-listings-in-kuala-lumpur/data_kaggle.csv")
df.info()
# Fast EDA using Profile Report
# Cheating our way a bit and understanding the data using Profile Report
# let's do some profile report
Report = ProfileReport(df, title="KL Property Dataset")
Report.to_notebook_iframe()
# Based on the report above, we have a lot of missing data, and a lot of work needs to be done.
# 1. Firstly, we need to set our target. We will be using price as our target column, but it needs to be cleaned first.
# 2. Next, we need to handle the missing values. The columns with the most missing values are car parks, furnishing, bathrooms and rooms. All of them need to be handled, and all of this will require some preprocessing first.
# 3. Next, we need to do some label encoding. Rooms and Size are the tricky parts here. For rooms, we need a way to handle the "+" sign. For size, it's better to split it: keep the built-up/land type in a separate column and the numeric size in another column. Only then can we handle the data.
# Pre-Processing
# Let's do some preprocessing by defining what we want to do to each columns
# let's set our target column
def target_preprocess(df, col):
df[col] = df[col].str.replace("RM", "").str.replace(",", "").apply(pd.to_numeric)
df = df.loc[df[col].notna()]
return df
df = target_preprocess(df, "Price")
df.isna().sum()
# Alright, let's talk about missing values.
# 1. For furnishing and car parks we can make the same assumption and fill them with a value of 0, because the missing values probably mean the property has no furnishing or car parks to begin with.
# 2. For rooms, bathrooms and size, it is a bit trickier. Rooms and bathrooms could be filled with the mode/median, but because these columns are highly correlated with price, that is not advisable. Let's just drop these rows so they do not mess up our predictions.
# define the functions to fill the missing values and drop the missing values
def fill_nan(df, column):
"""
This function takes a DataFrame and a column name as input and fills the missing values in the
specified column with 0. It returns the modified DataFrame.
"""
df[column] = df[column].fillna(0)
return df
def drop_nan(df, column):
"""
This function takes a DataFrame and a column name as input and drop the missing vaalue. It returns the modified DataFrame.
"""
df = df.loc[df[column].notna()]
return df
df = fill_nan(df, "Car Parks")
df = fill_nan(df, "Furnishing")
df.isna().sum()
df = drop_nan(df, "Rooms")
df = drop_nan(df, "Bathrooms")
df = drop_nan(df, "Size")
df.isna().sum()
# Alright, now our missing values are handled. Let's check how much data we have left
df.info()
# Alright, time to handle our Size and Rooms columns
df["Rooms"].value_counts()
# define functions to clean room columns
def clean_room_type(x):
if isinstance(x, int):
return float(x)
elif isinstance(x, str):
if "+" in x:
nums = [int(n) for n in x.split("+") if n]
return sum(nums) / len(nums)
elif x == "20 above":
return 25.0
elif x == "studio":
return 4.0
return 3
# apply the functions
df["Rooms"] = df["Rooms"].apply(clean_room_type)
df["Rooms"].value_counts()
df["Size"].value_counts()
# defining clean up functions
import ast
def clean_up_size(df, col):
df[["SizeType", "SizeValue"]] = df[col].str.extract(r"^([^:]+) : (.*) sq\. ft\.$")
df["SizeValue"] = (
df["SizeValue"].str.replace(",", "").str.replace("x", "*").str.replace("X", "*")
)
def evaluate_expression(expr):
try:
return ast.literal_eval(expr)
except:
return None
df["SizeValue"] = df["SizeValue"].apply(evaluate_expression).astype(float)
return df
df = clean_up_size(df, "Size")
df
# All right, we have done our preprocessing. Let's clean it up a bit before we move on to label encoding
# first, let's check whether our preprocessing has created any new NaN values
df.isna().sum()
# let's fill the nan with our pre-define function
df = fill_nan(df, "SizeValue")
df = fill_nan(df, "SizeType")
df.isna().sum()
# and now let's drop some columns
df = df.drop("Size", axis=1)
df.info()
# Label Encoding
# Let's prepare our data for training
from sklearn.preprocessing import LabelEncoder
# let's define the function for label encoding
def label_encoding(df):
for column in df.columns:
if df[column].dtype == "object":
df[column] = df[column].astype(str)
le = LabelEncoder()
df[column] = le.fit_transform(df[column])
return df
# let's label encode them
df = label_encoding(df)
df
# and now let's split our data for X and y
X = df.drop("Price", axis=1)
y = df.Price
# Let's do some cross validation
# With Repeated K Fold
# import additional libraries and dependencies
from sklearn.model_selection import RepeatedKFold
from sklearn.metrics import r2_score
import xgboost as xgb
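# A minimal sketch of the repeated K-fold cross-validation set up above; the XGBoost
# hyperparameters below are illustrative assumptions rather than tuned values.
rkf = RepeatedKFold(n_splits=5, n_repeats=3, random_state=42)
r2_scores = []
for train_idx, test_idx in rkf.split(X):
    X_tr, X_te = X.iloc[train_idx], X.iloc[test_idx]
    y_tr, y_te = y.iloc[train_idx], y.iloc[test_idx]
    xgb_model = xgb.XGBRegressor(n_estimators=300, learning_rate=0.1, random_state=42)
    xgb_model.fit(X_tr, y_tr)
    r2_scores.append(r2_score(y_te, xgb_model.predict(X_te)))
print("Mean R2 across folds:", np.mean(r2_scores))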
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/705/129705293.ipynb
|
property-listings-in-kuala-lumpur
|
dragonduck
|
[{"Id": 129705293, "ScriptId": 38418539, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14329920, "CreationDate": "05/15/2023 22:40:01", "VersionNumber": 1.0, "Title": "KL House Price Prediction", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 166.0, "LinesInsertedFromPrevious": 166.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
|
[{"Id": 186037057, "KernelVersionId": 129705293, "SourceDatasetVersionId": 533897}]
|
[{"Id": 533897, "DatasetId": 254011, "DatasourceVersionId": 550262, "CreatorUserId": 1347858, "LicenseName": "CC0: Public Domain", "CreationDate": "07/04/2019 06:31:19", "VersionNumber": 1.0, "Title": "Property Listings in Kuala Lumpur", "Slug": "property-listings-in-kuala-lumpur", "Subtitle": "Web scraping results of a property listing portal", "Description": "# Property Listings in Kuala Lumpur\nThis is the tabular result of scraping a property listing website for properties for sale in Kuala Lumpur, Malaysia. Only the overview page was scraped so individual property details are scarce.", "VersionNotes": "Initial release", "TotalCompressedBytes": 5914989.0, "TotalUncompressedBytes": 611897.0}]
|
[{"Id": 254011, "CreatorUserId": 1347858, "OwnerUserId": 1347858.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 533897.0, "CurrentDatasourceVersionId": 550262.0, "ForumId": 265269, "Type": 2, "CreationDate": "07/04/2019 06:31:19", "LastActivityDate": "07/04/2019", "TotalViews": 22081, "TotalDownloads": 2655, "TotalVotes": 55, "TotalKernels": 6}]
|
[{"Id": 1347858, "UserName": "dragonduck", "DisplayName": "Jan S", "RegisterDate": "10/20/2017", "PerformanceTier": 1}]
|
# Performing House Price Prediction for Malaysia
# This will only be solved using simple regression techniques
# importing the dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from ydata_profiling import ProfileReport
# load the data
df = pd.read_csv("/kaggle/input/property-listings-in-kuala-lumpur/data_kaggle.csv")
df.info()
# Fast EDA using Profile Report
# Cheating our way a bit and understanding the data using Profile Report
# let's do some profile report
Report = ProfileReport(df, title="KL Property Dataset")
Report.to_notebook_iframe()
# Based on the report above, we have a lot of missing data, and a lot of work needs to be done.
# 1. Firstly, we need to set our target. We will be using price as our target column. But it needs to be cleaned first.
# 2. Next, we need to handle the missing values. The highest missing counts come from car parks, furnishing, bathrooms and rooms. All of them need to be handled, and all of them will require some preprocessing first
# 3. Next, we need to do some label encoding. Rooms and Size are the tricky parts here. For rooms, we need a way to handle the + sign. For size, it is better to split it: keep the built-up/land type in one column and the numeric size in another. Only then can we handle the data (see the small illustration below).
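# Quick illustration of the Size split described above (the sample string is an
# assumption about the raw format, e.g. "Built-up : 1,335 sq. ft."):
import re
sample = "Built-up : 1,335 sq. ft."
match = re.match(r"^([^:]+) : (.*) sq\. ft\.$", sample)
if match:
    print(match.group(1).strip(), "|", match.group(2))  # -> "Built-up | 1,335"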
# Pre-Processing
# Let's do some preprocessing by defining what we want to do to each columns
# let's set our target column
def target_preprocess(df, col):
df[col] = df[col].str.replace("RM", "").str.replace(",", "").apply(pd.to_numeric)
df = df.loc[df[col].notna()]
return df
df = target_preprocess(df, "Price")
df.isna().sum()
# Alright, let's talk about missing values.
# 1. For furnishing and car parks we can make the same assumption: fill them with a value of 0, because the missing values likely come from properties that have no furnishing or car parks to begin with.
# 2. For rooms, bathrooms and size, the situation is a bit trickier. Rooms and bathrooms could be filled with the mode/median, but because both columns correlate strongly with price, imputing them is not advisable. Let's just drop those rows so they don't skew our predictions
# define the functions to fill the missing values and drop the missing values
def fill_nan(df, column):
"""
This function takes a DataFrame and a column name as input and fills the missing values in the
specified column with 0. It returns the modified DataFrame.
"""
df[column] = df[column].fillna(0)
return df
def drop_nan(df, column):
"""
This function takes a DataFrame and a column name as input and drops the rows where that column is missing. It returns the modified DataFrame.
"""
df = df.loc[df[column].notna()]
return df
df = fill_nan(df, "Car Parks")
df = fill_nan(df, "Furnishing")
df.isna().sum()
df = drop_nan(df, "Rooms")
df = drop_nan(df, "Bathrooms")
df = drop_nan(df, "Size")
df.isna().sum()
# Alright, now our missing values are handled. Let's check how many rows and columns we have left
df.info()
# Alright, time to handle our size and room columns
df["Rooms"].value_counts()
# define functions to clean room columns
def clean_room_type(x):
    # numeric values pass straight through as floats
    if isinstance(x, int):
        return float(x)
    elif isinstance(x, str):
        # entries such as "3+1" are mapped to the mean of their parts (here 2.0)
        if "+" in x:
            nums = [int(n) for n in x.split("+") if n]
            return sum(nums) / len(nums)
        # heuristic mappings chosen for the remaining text labels
        elif x == "20 above":
            return 25.0
        elif x == "studio":
            return 4.0
    # anything unrecognised falls back to 3 rooms
    return 3
# apply the functions
df["Rooms"] = df["Rooms"].apply(clean_room_type)
df["Rooms"].value_counts()
df["Size"].value_counts()
# defining clean up functions
import ast
def clean_up_size(df, col):
df[["SizeType", "SizeValue"]] = df[col].str.extract(r"^([^:]+) : (.*) sq\. ft\.$")
df["SizeValue"] = (
df["SizeValue"].str.replace(",", "").str.replace("x", "*").str.replace("X", "*")
)
def evaluate_expression(expr):
try:
return ast.literal_eval(expr)
except:
return None
df["SizeValue"] = df["SizeValue"].apply(evaluate_expression).astype(float)
return df
df = clean_up_size(df, "Size")
df
# All right, we have done our preprocessing. Let's clean it up a bit before we move on to label encoding
# first, let us check whether our preprocessing has created any new NaN values
df.isna().sum()
# let's fill the NaNs with our pre-defined function
df = fill_nan(df, "SizeValue")
df = fill_nan(df, "SizeType")
df.isna().sum()
# and now let's drop some columns
df = df.drop("Size", axis=1)
df.info()
# Label Encoding
# Let's prepare our data for training
from sklearn.preprocessing import LabelEncoder
# let's define the function for label encoding
def label_encoding(df):
for column in df.columns:
if df[column].dtype == "object":
df[column] = df[column].astype(str)
le = LabelEncoder()
df[column] = le.fit_transform(df[column])
return df
# let's label encode them
df = label_encoding(df)
df
# and now let's split our data for X and y
X = df.drop("Price", axis=1)
y = df.Price
# Let's do some cross validation
# With Repeated K Fold
# import additional libraries and dependencies
from sklearn.model_selection import RepeatedKFold
from sklearn.metrics import r2_score
import xgboost as xgb
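# The cross-validation itself is not included in this version of the notebook; a
# minimal sketch (an assumption, not the author's actual setup) of RepeatedKFold
# with an XGBoost regressor scored by R^2 could look like this:
rkf = RepeatedKFold(n_splits=5, n_repeats=2, random_state=42)
scores = []
for train_idx, test_idx in rkf.split(X):
    model = xgb.XGBRegressor(n_estimators=300, random_state=42)
    model.fit(X.iloc[train_idx], y.iloc[train_idx])
    scores.append(r2_score(y.iloc[test_idx], model.predict(X.iloc[test_idx])))
print(f"Mean R^2 across folds: {np.mean(scores):.3f}")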
| false | 1 | 1,517 | 3 | 1,612 | 1,517 |
||
129705360
|
<jupyter_start><jupyter_text>Preprocessed FOG Dataset
Kaggle dataset identifier: fog-dataset
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
data = pd.read_csv("/kaggle/input/fog-dataset/fog_dataset.csv")
data
X = data[["AccV", "AccML", "AccAP"]]
y_StartHes = data["StartHesitation"]
y_Turn = data["Turn"]
y_Walk = data["Walking"]
from sklearn.model_selection import train_test_split
# train/test split for the Turn target
X_Turn_train, X_Turn_test, y_Turn_train, y_Turn_test = train_test_split(
X, y_Turn, test_size=0.2, random_state=42
)
# train/test split for the StartHesitation target
X_StartHes_train, X_StartHes_test, y_StartHes_train, y_StartHes_test = train_test_split(
X, y_StartHes, test_size=0.2, random_state=42
)
# train/test split for the Walking target
X_Walk_train, X_Walk_test, y_Walk_train, y_Walk_test = train_test_split(
X, y_Walk, test_size=0.2, random_state=42
)
# generator that yields the data in fixed-size chunks for incremental (partial_fit) training
def chunk(x, y, chunksize=20000):
l = len(x)
for ndx in range(0, l, chunksize):
yield x[ndx : min(ndx + chunksize, l)], y[ndx : min(ndx + chunksize, l)]
# incremental training on the Turn target, one chunk at a time
# (imports added and the undefined X_test corrected to X_Turn_test so this block runs)
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

clf_Turn = SGDClassifier(
    alpha=0.0001, loss="log_loss", n_jobs=-1, shuffle=True, max_iter=100
)
chunk_generator = chunk(X_Turn_train, y_Turn_train)
for index, (chunk_X, chunk_y) in enumerate(chunk_generator):
    clf_Turn.partial_fit(chunk_X, chunk_y, classes=[0, 1])
y_Turn_predicted = clf_Turn.predict(X_Turn_test)
print(accuracy_score(y_Turn_test, y_Turn_predicted))
from sklearn.linear_model import SGDClassifier
import random
# shuffled mini-batch training (fixed so it runs: the range is converted to a list
# before shuffling, rows are selected with .iloc, np.array_split replaces the missing
# batches() helper, and np replaces the unimported numpy; the Turn labels are assumed
# as the target, since Y was not defined at this point in the notebook)
clf2 = SGDClassifier(loss="log_loss")
shuffled_idx = list(range(len(X)))
n_iter = 5
for n in range(n_iter):
    random.shuffle(shuffled_idx)
    shuffled_X = X.iloc[shuffled_idx]
    shuffled_y = y_Turn.iloc[shuffled_idx]
    n_batches = max(1, len(shuffled_X) // 10000)
    for batch_idx in np.array_split(np.arange(len(shuffled_X)), n_batches):
        clf2.partial_fit(
            shuffled_X.iloc[batch_idx],
            shuffled_y.iloc[batch_idx],
            classes=np.unique(y_Turn),
        )
from sklearn.linear_model import SGDClassifier
from tqdm.notebook import tqdm
chunksize = 20000
# "log" was renamed to "log_loss" in recent scikit-learn versions
clf_SH = SGDClassifier(alpha=0.0001, loss="log_loss", penalty="l2", n_jobs=-1, shuffle=True)
for train_df in tqdm(
pd.read_csv(
"/kaggle/input/fog-dataset/fog_dataset.csv", chunksize=chunksize, iterator=True
)
):
X = train_df[["AccV", "AccML", "AccAP"]]
Y = train_df["StartHesitation"]
clf_SH.partial_fit(X, Y, classes=[0, 1])
from sklearn.metrics import average_precision_score
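# The import above is unused in this version of the notebook; an assumed minimal
# evaluation of the StartHesitation model with average precision, scored on the
# held-out split created earlier, could look like this:
sh_scores = clf_SH.predict_proba(X_StartHes_test)[:, 1]
print("StartHesitation average precision:", average_precision_score(y_StartHes_test, sh_scores))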
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/705/129705360.ipynb
|
fog-dataset
|
aerikg
|
[{"Id": 129705360, "ScriptId": 38519248, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6171471, "CreationDate": "05/15/2023 22:41:15", "VersionNumber": 2.0, "Title": "notebook1127797ef2", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 74.0, "LinesInsertedFromPrevious": 43.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 31.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186037170, "KernelVersionId": 129705360, "SourceDatasetVersionId": 5573463}]
|
[{"Id": 5573463, "DatasetId": 3168620, "DatasourceVersionId": 5648287, "CreatorUserId": 12406707, "LicenseName": "Unknown", "CreationDate": "05/01/2023 11:15:51", "VersionNumber": 4.0, "Title": "Preprocessed FOG Dataset", "Slug": "fog-dataset", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Data Update 2023-05-01", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3168620, "CreatorUserId": 12406707, "OwnerUserId": 12406707.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5573463.0, "CurrentDatasourceVersionId": 5648287.0, "ForumId": 3232837, "Type": 2, "CreationDate": "04/22/2023 19:25:46", "LastActivityDate": "04/22/2023", "TotalViews": 176, "TotalDownloads": 19, "TotalVotes": 0, "TotalKernels": 4}]
|
[{"Id": 12406707, "UserName": "aerikg", "DisplayName": "\u042d\u0440\u0438\u043a \u0410\u0431\u0434\u0443\u0440\u0430\u0445\u043c\u0430\u043d\u043e\u0432", "RegisterDate": "11/14/2022", "PerformanceTier": 0}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
data = pd.read_csv("/kaggle/input/fog-dataset/fog_dataset.csv")
data
X = data[["AccV", "AccML", "AccAP"]]
y_StartHes = data["StartHesitation"]
y_Turn = data["Turn"]
y_Walk = data["Walking"]
from sklearn.model_selection import train_test_split
# train/test split for the Turn target
X_Turn_train, X_Turn_test, y_Turn_train, y_Turn_test = train_test_split(
X, y_Turn, test_size=0.2, random_state=42
)
# train/test split for the StartHesitation target
X_StartHes_train, X_StartHes_test, y_StartHes_train, y_StartHes_test = train_test_split(
X, y_StartHes, test_size=0.2, random_state=42
)
# train/test split for the Walking target
X_Walk_train, X_Walk_test, y_Walk_train, y_Walk_test = train_test_split(
X, y_Walk, test_size=0.2, random_state=42
)
# generator that yields the data in fixed-size chunks for incremental (partial_fit) training
def chunk(x, y, chunksize=20000):
l = len(x)
for ndx in range(0, l, chunksize):
yield x[ndx : min(ndx + chunksize, l)], y[ndx : min(ndx + chunksize, l)]
# incremental training on the Turn target, one chunk at a time
# (imports added and the undefined X_test corrected to X_Turn_test so this block runs)
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

clf_Turn = SGDClassifier(
    alpha=0.0001, loss="log_loss", n_jobs=-1, shuffle=True, max_iter=100
)
chunk_generator = chunk(X_Turn_train, y_Turn_train)
for index, (chunk_X, chunk_y) in enumerate(chunk_generator):
    clf_Turn.partial_fit(chunk_X, chunk_y, classes=[0, 1])
y_Turn_predicted = clf_Turn.predict(X_Turn_test)
print(accuracy_score(y_Turn_test, y_Turn_predicted))
from sklearn.linear_model import SGDClassifier
import random
# shuffled mini-batch training (fixed so it runs: the range is converted to a list
# before shuffling, rows are selected with .iloc, np.array_split replaces the missing
# batches() helper, and np replaces the unimported numpy; the Turn labels are assumed
# as the target, since Y was not defined at this point in the notebook)
clf2 = SGDClassifier(loss="log_loss")
shuffled_idx = list(range(len(X)))
n_iter = 5
for n in range(n_iter):
    random.shuffle(shuffled_idx)
    shuffled_X = X.iloc[shuffled_idx]
    shuffled_y = y_Turn.iloc[shuffled_idx]
    n_batches = max(1, len(shuffled_X) // 10000)
    for batch_idx in np.array_split(np.arange(len(shuffled_X)), n_batches):
        clf2.partial_fit(
            shuffled_X.iloc[batch_idx],
            shuffled_y.iloc[batch_idx],
            classes=np.unique(y_Turn),
        )
from sklearn.linear_model import SGDClassifier
from tqdm.notebook import tqdm
chunksize = 20000
# "log" was renamed to "log_loss" in recent scikit-learn versions
clf_SH = SGDClassifier(alpha=0.0001, loss="log_loss", penalty="l2", n_jobs=-1, shuffle=True)
for train_df in tqdm(
pd.read_csv(
"/kaggle/input/fog-dataset/fog_dataset.csv", chunksize=chunksize, iterator=True
)
):
X = train_df[["AccV", "AccML", "AccAP"]]
Y = train_df["StartHesitation"]
clf_SH.partial_fit(X, Y, classes=[0, 1])
from sklearn.metrics import average_precision_score
| false | 1 | 1,028 | 0 | 1,050 | 1,028 |
||
129705408
|
<jupyter_start><jupyter_text>NBA Database
<blockquote><h2>Welcome to the <i><b>NBA Database</b></i>! 👋 🏀 ⛹️♂️ </h2></blockquote>
This dataset is updated daily and includes:
- **30** teams
- **4800+** players
- **60,000+** games (every game since the inaugural 1946-47 NBA season)
- **Box Scores** for over 95% of all games
- **Play-by-Play** game data with ***13M+ rows*** of Play-by-Play data in all!
---
- See [here](https://www.kaggle.com/wyattowalsh/using-sql) for tips on using SQL with this database
- [daily updater notebook](https://www.kaggle.com/code/wyattowalsh/database-updater-daily) and [monthly updater notebook](https://www.kaggle.com/code/wyattowalsh/database-updater-monthly)
⮕ View the <a href="https://github.com/wyattowalsh/nba-db">associated GitHub repo<img src="https://gist.githubusercontent.com/wyattowalsh/33b635109116e07044c6336527681051/raw/6b24b749532f4e167657fcc014a310b8c4bfa661/github.svg"></a> and [code base docs site 📄](https://nba-db.readthedocs.io/)
⮕ Sponsor project: <a href="https://github.com/sponsors/wyattowalsh"><img src="https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&color=%23fe8e86"></a>
---
<h5>Built With:</h5>
<a href="https://www.kaggle.com/docs" target="_blank"><img alt="Kaggle" src="https://img.shields.io/badge/kaggle-%2320BEFF.svg?&style=for-the-badge&logo=kaggle&logoColor=white"></a><a href="https://docs.github.com/en" target="_blank"><img alt="GitHub" src="https://img.shields.io/badge/github-%23181717.svg?&style=for-the-badge&logo=github&logoColor=white"></a><a href="https://docs.python.org/3/" target="_blank"><img alt="Python" src="https://img.shields.io/badge/python%20-%2314354C.svg?&style=for-the-badge&logo=python&logoColor=white"></a><a href="https://sqlite.org/docs.html" target="_blank"><img alt="SQLite" src="https://img.shields.io/badge/sqlite%20-%23003B57.svg?&style=for-the-badge&logo=sqlite&logoColor=white"></a> <img src="https://raw.githubusercontent.com/wyattowalsh/nba-db/main/docs/_static/img/logo.svg">
Kaggle dataset identifier: basketball
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import numpy as np
from scipy.stats import ttest_ind
games = pd.read_csv("/kaggle/input/basketball/csv/game.csv")
print(games.head())
games["game_date"] = pd.to_datetime(games["game_date"])
last_10 = games[games["game_date"] >= "2013-07-07"]
print(last_10.head())
# This code was to filter games only in the past 10 seasons.
home_wins = (last_10["wl_home"] == "W").sum()
home_losses = (last_10["wl_home"] == "L").sum()
home_winning_pct = home_wins / (home_wins + home_losses)
print("Number of Home Wins:", home_wins)
print("Number of Home Losses:", home_losses)
print("Home Winning Percentage:", home_winning_pct)
# We can see that home teams win about 57.2% of the time.
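# As an additional (assumed, not in the original analysis) sanity check, a binomial
# test confirms that a 57.2% home win rate is very unlikely under a fair 50/50 split:
from scipy.stats import binomtest
result = binomtest(int(home_wins), int(home_wins + home_losses), p=0.5, alternative="greater")
print("Binomial test p-value for home-court advantage:", result.pvalue)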
last_10_t = last_10[
    (~pd.isnull(last_10["fg3_pct_home"])) & (~pd.isnull(last_10["fg3_pct_away"]))
]
# use the NaN-filtered frame in the t-test, otherwise missing values turn the result into NaN
t_statistic, p_value = ttest_ind(last_10_t["fg3_pct_home"], last_10_t["fg3_pct_away"])
print("t-statistic:", t_statistic)
print("p-value:", p_value)
avg_3_home = last_10["fg3_pct_home"].mean()
avg_3_away = last_10["fg3_pct_away"].mean()
print("Average Home 3 Point Shooting:", avg_3_home)
print("Average Away 3 Point Shooting:", avg_3_away)
diff_3_home_away = avg_3_home - avg_3_away
print("Home Court 3 Point Advantage:", diff_3_home_away)
# We can see that the average advantage gained by home teams is only about 0.865 percentage points in 3-point shooting. Even so, because the sample size is so large, the difference is statistically significant: the p-value in our two-sample t-test is extremely small.
last_10_t = last_10[
    (~pd.isnull(last_10["fg_pct_home"])) & (~pd.isnull(last_10["fg_pct_away"]))
]
# again, test on the NaN-filtered frame so missing box scores don't produce a NaN result
t_statistic, p_value = ttest_ind(last_10_t["fg_pct_home"], last_10_t["fg_pct_away"])
print("t-statistic:", t_statistic)
print("p-value:", p_value)
avg_fg_home = last_10["fg_pct_home"].mean()
avg_fg_away = last_10["fg_pct_away"].mean()
print("Average Home FG Shooting:", avg_fg_home)
print("Average Away FG Shooting:", avg_fg_away)
diff_fg_home_away = avg_fg_home - avg_fg_away
print("Home Court FG% Advantage:", diff_fg_home_away)
# Again, we see a similar trend, with a slightly larger home-team advantage of about 0.953 percentage points in FG%. The p-value is even smaller this time, indicating an even more significant statistical difference between home and away FG shooting performance than for 3-pointers.
teams = last_10["team_abbreviation_home"].unique()
for team in teams:
team_data = last_10[last_10["team_abbreviation_home"] == team]
num_wins = (team_data["wl_home"] == "W").sum()
num_losses = (team_data["wl_home"] == "L").sum()
win_pct = num_wins / (num_wins + num_losses)
print(team, "Win Pct at Home:", win_pct)
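# A small assumed extension: collect the same per-team numbers into a sorted Series,
# which is easier to read than the printed loop output above.
home_pct = (
    last_10.assign(win=lambda d: (d["wl_home"] == "W").astype(int))
    .groupby("team_abbreviation_home")["win"]
    .mean()
    .sort_values(ascending=False)
)
print(home_pct.head(10))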
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/705/129705408.ipynb
|
basketball
| null |
[{"Id": 129705408, "ScriptId": 38571230, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13405093, "CreationDate": "05/15/2023 22:42:11", "VersionNumber": 1.0, "Title": "NBA Home/Away Shooting", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 86.0, "LinesInsertedFromPrevious": 86.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186037336, "KernelVersionId": 129705408, "SourceDatasetVersionId": 5620892}]
|
[{"Id": 5620892, "DatasetId": 1218020, "DatasourceVersionId": 5696085, "CreatorUserId": 2507257, "LicenseName": "CC BY-SA 4.0", "CreationDate": "05/06/2023 19:46:24", "VersionNumber": 228.0, "Title": "NBA Database", "Slug": "basketball", "Subtitle": "Daily Updated SQLite Database \u2014 64,000+ Games, 4800+ Players, and 30 Teams \ud83c\udfc0", "Description": "<blockquote><h2>Welcome to the <i><b>NBA Database</b></i>! \ud83d\udc4b \ud83c\udfc0 \u26f9\ufe0f\u200d\u2642\ufe0f </h2></blockquote>\n\nThis dataset is updated daily and includes:\n\n- **30** teams\n- **4800+** players\n- **60,000+** games (every game since the inaugural 1946-47 NBA season)\n- **Box Scores** for over 95% of all games\n- **Play-by-Play** game data with ***13M+ rows*** of Play-by-Play data in all!\n\n\n---\n\n- See [here](https://www.kaggle.com/wyattowalsh/using-sql) for tips on using SQL with this database\n- [daily updater notebook](https://www.kaggle.com/code/wyattowalsh/database-updater-daily) and [monthly updater notebook](https://www.kaggle.com/code/wyattowalsh/database-updater-monthly)\n\n\u2b95 View the <a href=\"https://github.com/wyattowalsh/nba-db\">associated GitHub repo<img src=\"https://gist.githubusercontent.com/wyattowalsh/33b635109116e07044c6336527681051/raw/6b24b749532f4e167657fcc014a310b8c4bfa661/github.svg\"></a> and [code base docs site \ud83d\udcc4](https://nba-db.readthedocs.io/)\n\u2b95 Sponsor project: <a href=\"https://github.com/sponsors/wyattowalsh\"><img src=\"https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&color=%23fe8e86\"></a>\n\n---\n\n<h5>Built With:</h5>\n \n<a href=\"https://www.kaggle.com/docs\" target=\"_blank\"><img alt=\"Kaggle\" src=\"https://img.shields.io/badge/kaggle-%2320BEFF.svg?&style=for-the-badge&logo=kaggle&logoColor=white\"></a><a href=\"https://docs.github.com/en\" target=\"_blank\"><img alt=\"GitHub\" src=\"https://img.shields.io/badge/github-%23181717.svg?&style=for-the-badge&logo=github&logoColor=white\"></a><a href=\"https://docs.python.org/3/\" target=\"_blank\"><img alt=\"Python\" src=\"https://img.shields.io/badge/python%20-%2314354C.svg?&style=for-the-badge&logo=python&logoColor=white\"></a><a href=\"https://sqlite.org/docs.html\" target=\"_blank\"><img alt=\"SQLite\" src=\"https://img.shields.io/badge/sqlite%20-%23003B57.svg?&style=for-the-badge&logo=sqlite&logoColor=white\"></a> <img src=\"https://raw.githubusercontent.com/wyattowalsh/nba-db/main/docs/_static/img/logo.svg\">", "VersionNotes": "Monthly update: 2023-05-06", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1218020, "CreatorUserId": 2507257, "OwnerUserId": 2507257.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 6090760.0, "CurrentDatasourceVersionId": 6169227.0, "ForumId": 1236110, "Type": 2, "CreationDate": "03/18/2021 00:21:25", "LastActivityDate": "03/18/2021", "TotalViews": 193588, "TotalDownloads": 19989, "TotalVotes": 513, "TotalKernels": 30}]
| null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import numpy as np
from scipy.stats import ttest_ind
games = pd.read_csv("/kaggle/input/basketball/csv/game.csv")
print(games.head())
games["game_date"] = pd.to_datetime(games["game_date"])
last_10 = games[games["game_date"] >= "2013-07-07"]
print(last_10.head())
# This code was to filter games only in the past 10 seasons.
home_wins = (last_10["wl_home"] == "W").sum()
home_losses = (last_10["wl_home"] == "L").sum()
home_winning_pct = home_wins / (home_wins + home_losses)
print("Number of Home Wins:", home_wins)
print("Number of Home Losses:", home_losses)
print("Home Winning Percentage:", home_winning_pct)
# We can see that home teams win about 57.2% of the time.
last_10_t = last_10[
    (~pd.isnull(last_10["fg3_pct_home"])) & (~pd.isnull(last_10["fg3_pct_away"]))
]
# use the NaN-filtered frame in the t-test, otherwise missing values turn the result into NaN
t_statistic, p_value = ttest_ind(last_10_t["fg3_pct_home"], last_10_t["fg3_pct_away"])
print("t-statistic:", t_statistic)
print("p-value:", p_value)
avg_3_home = last_10["fg3_pct_home"].mean()
avg_3_away = last_10["fg3_pct_away"].mean()
print("Average Home 3 Point Shooting:", avg_3_home)
print("Average Away 3 Point Shooting:", avg_3_away)
diff_3_home_away = avg_3_home - avg_3_away
print("Home Court 3 Point Advantage:", diff_3_home_away)
# We can see that the average advantage gained by home teams is only about 0.865 percentage points in 3-point shooting. Even so, because the sample size is so large, the difference is statistically significant: the p-value in our two-sample t-test is extremely small.
last_10_t = last_10[
    (~pd.isnull(last_10["fg_pct_home"])) & (~pd.isnull(last_10["fg_pct_away"]))
]
# again, test on the NaN-filtered frame so missing box scores don't produce a NaN result
t_statistic, p_value = ttest_ind(last_10_t["fg_pct_home"], last_10_t["fg_pct_away"])
print("t-statistic:", t_statistic)
print("p-value:", p_value)
avg_fg_home = last_10["fg_pct_home"].mean()
avg_fg_away = last_10["fg_pct_away"].mean()
print("Average Home FG Shooting:", avg_fg_home)
print("Average Away FG Shooting:", avg_fg_away)
diff_fg_home_away = avg_fg_home - avg_fg_away
print("Home Court FG% Advantage:", diff_fg_home_away)
# Again, we see a similar trend, with a slightly larger home-team advantage of about 0.953 percentage points in FG%. The p-value is even smaller this time, indicating an even more significant statistical difference between home and away FG shooting performance than for 3-pointers.
teams = last_10["team_abbreviation_home"].unique()
for team in teams:
team_data = last_10[last_10["team_abbreviation_home"] == team]
num_wins = (team_data["wl_home"] == "W").sum()
num_losses = (team_data["wl_home"] == "L").sum()
win_pct = num_wins / (num_wins + num_losses)
print(team, "Win Pct at Home:", win_pct)
| false | 0 | 1,130 | 0 | 1,940 | 1,130 |
||
129036266
|
<jupyter_start><jupyter_text>Connecticut Real Estate Sales Data
```
The Office of Policy and Management maintains a listing of all real estate sales with a sales price of $2,000 or greater that occur between October 1 and September 30 of each year. For each sale record, the file includes: town, property address, date of sale, property type (residential, apartment, commercial, industrial or vacant land), sales price, and property assessment.
Data are collected in accordance with Connecticut General Statutes, section 10-261a and 10-261b: https://www.cga.ct.gov/current/pub/chap_172.htm#sec_10-261a and https://www.cga.ct.gov/current/pub/chap_172.htm#sec_10-261b. Annual real estate sales are reported by grand list year (October 1 through September 30 each year). For instance, sales from 2018 GL are from 10/01/2018 through 9/30/2019.
```
| Column Name | Description |
|-------------------|------------------------------------------------------------|
| Serial Number | A unique identifier for each record in the dataset. |
| List Year | The grand list year in which the sale was recorded. |
| Date Recorded | The date when the sale was recorded. |
| Town | The town where the property is located. |
| Address | The address of the property. |
| Assessed Value | The assessed value of the property. |
| Sale Amount | The sales price of the property. |
| Sales Ratio | The sales ratio of the property. |
| Property Type | The type of the property (residential, apartment, commercial, industrial, or vacant land). |
| Residential Type | The type of residential property (if applicable). |
| Non Use Code | The non-use code associated with the property (if applicable). |
| Assessor Remarks | Remarks or comments provided by the assessor (if available). |
| OPM Remarks | Remarks or comments provided by the Office of Policy and Management (if available). |
| Location | The location of the property (if available). |
Kaggle dataset identifier: real-estate-sales-2001-2020-gl
<jupyter_script>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
df = pd.read_csv(
"/kaggle/input/real-estate-sales-2001-2020-gl/Real_Estate_Sales_2001-2020_GL.csv"
)
df.info()
# for numerical data
df.describe()
# for categorical data
df.describe(include="all")
df.sample(2)
df["OPM remarks"].value_counts()
df = df.drop("OPM remarks", axis=1)
df = df.dropna()
df.info()
df["Date Recorded"].values
# # **Time series analysis**
# **convert type (object) to datetime**
df["Date Recorded"] = pd.to_datetime(df["Date Recorded"])
df["Date Recorded"].values
# **Make column for year**
df["Year"] = df["Date Recorded"].dt.year
# **Make column for month name**
#
df["Month"] = df["Date Recorded"].dt.month_name()
# **Make column for day name**
df["Day"] = df["Date Recorded"].dt.day_name()
# **Make column for quarter (1,2,3,4)**
df["Quarter"] = df["Date Recorded"].dt.quarter
# **it looks perfect now**
df.sample(3)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/036/129036266.ipynb
|
real-estate-sales-2001-2020-gl
|
utkarshx27
|
[{"Id": 129036266, "ScriptId": 38357389, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9663382, "CreationDate": "05/10/2023 13:28:41", "VersionNumber": 1.0, "Title": "\u23f2 Time series analysis", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 58.0, "LinesInsertedFromPrevious": 58.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 25}]
|
[{"Id": 184738905, "KernelVersionId": 129036266, "SourceDatasetVersionId": 5606108}]
|
[{"Id": 5606108, "DatasetId": 3224580, "DatasourceVersionId": 5681188, "CreatorUserId": 13364933, "LicenseName": "U.S. Government Works", "CreationDate": "05/05/2023 04:22:05", "VersionNumber": 1.0, "Title": "Connecticut Real Estate Sales Data", "Slug": "real-estate-sales-2001-2020-gl", "Subtitle": "Property Sales, Assessments, and Trends in Connecticut 2001 - 2020", "Description": "```\nThe Office of Policy and Management maintains a listing of all real estate sales with a sales price of $2,000 or greater that occur between October 1 and September 30 of each year. For each sale record, the file includes: town, property address, date of sale, property type (residential, apartment, commercial, industrial or vacant land), sales price, and property assessment.\n\nData are collected in accordance with Connecticut General Statutes, section 10-261a and 10-261b: https://www.cga.ct.gov/current/pub/chap_172.htm#sec_10-261a and https://www.cga.ct.gov/current/pub/chap_172.htm#sec_10-261b. Annual real estate sales are reported by grand list year (October 1 through September 30 each year). For instance, sales from 2018 GL are from 10/01/2018 through 9/30/2019.\n```\n| Column Name | Description |\n|-------------------|------------------------------------------------------------|\n| Serial Number | A unique identifier for each record in the dataset. |\n| List Year | The grand list year in which the sale was recorded. |\n| Date Recorded | The date when the sale was recorded. |\n| Town | The town where the property is located. |\n| Address | The address of the property. |\n| Assessed Value | The assessed value of the property. |\n| Sale Amount | The sales price of the property. |\n| Sales Ratio | The sales ratio of the property. |\n| Property Type | The type of the property (residential, apartment, commercial, industrial, or vacant land). |\n| Residential Type | The type of residential property (if applicable). |\n| Non Use Code | The non-use code associated with the property (if applicable). |\n| Assessor Remarks | Remarks or comments provided by the assessor (if available). |\n| OPM Remarks | Remarks or comments provided by the Office of Policy and Management (if available). |\n| Location | The location of the property (if available). |", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3224580, "CreatorUserId": 13364933, "OwnerUserId": 13364933.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5606108.0, "CurrentDatasourceVersionId": 5681188.0, "ForumId": 3289617, "Type": 2, "CreationDate": "05/05/2023 04:22:05", "LastActivityDate": "05/05/2023", "TotalViews": 6426, "TotalDownloads": 1241, "TotalVotes": 35, "TotalKernels": 1}]
|
[{"Id": 13364933, "UserName": "utkarshx27", "DisplayName": "Utkarsh Singh", "RegisterDate": "01/21/2023", "PerformanceTier": 2}]
|
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
df = pd.read_csv(
"/kaggle/input/real-estate-sales-2001-2020-gl/Real_Estate_Sales_2001-2020_GL.csv"
)
df.info()
# for numerical data
df.describe()
# for categorical data
df.describe(include="all")
df.sample(2)
df["OPM remarks"].value_counts()
df = df.drop("OPM remarks", axis=1)
df = df.dropna()
df.info()
df["Date Recorded"].values
# # **Time series analysis**
# **convert type (object) to datetime**
df["Date Recorded"] = pd.to_datetime(df["Date Recorded"])
df["Date Recorded"].values
# **Make column for year**
df["Year"] = df["Date Recorded"].dt.year
# **Make column for month name**
#
df["Month"] = df["Date Recorded"].dt.month_name()
# **Make column for day name**
df["Day"] = df["Date Recorded"].dt.day_name()
# **Make column for quarter (1,2,3,4)**
df["Quarter"] = df["Date Recorded"].dt.quarter
# **it looks perfect now**
df.sample(3)
| false | 1 | 347 | 25 | 920 | 347 |
||
129041903
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, StandardScaler
from sklearn.metrics import (
confusion_matrix,
roc_curve,
roc_auc_score,
accuracy_score,
precision_score,
recall_score,
f1_score,
)
import matplotlib.pyplot as plt
import seaborn as sns
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
# Load data
data = pd.read_csv("/kaggle/input/datas-req/natural prod toxic.csv")
# Drop rows with missing data
data = data.dropna()
# Note: the original comment here said "drop rows with invalid SMILES", but this
# second dropna() only removes missing values again (it is redundant and does not validate SMILES)
data = data.dropna()
# Label encode SMILES
le = LabelEncoder()
data["SMILES"] = le.fit_transform(data["SMILES"])
# Standardize features
scaler = StandardScaler()
X = data[["SMILES", "HBA", "HBD", "MW", "ROT", "logP", "TPSA"]]
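# Note (added): the StandardScaler above is created but never applied in this
# notebook, and CatBoost below does not require scaling. If scaling were wanted,
# an assumed usage would be:
X_scaled = pd.DataFrame(scaler.fit_transform(X), columns=X.columns, index=X.index)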
data
df = data
df
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn import metrics
import shap
import catboost
from catboost import CatBoostClassifier, Pool
import numpy as np
import matplotlib
from sklearn.model_selection import train_test_split
from catboost import CatBoostClassifier
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.metrics import classification_report
from sklearn import svm, datasets
from sklearn.metrics import ConfusionMatrixDisplay
# df = pd.read_csv("kc3.csv")
# df['c'] = df['c'].replace(False, 0)
# df['c'] = df['c'].replace(True, 1)
# df=df.drop(['coconut_id'], axis=1)
print(f"Size of Dataset {df.shape}")
features = [feat for feat in list(df) if feat != "toxic"]
# print(features)
X_train, X_test, y_train, y_test = train_test_split(
df[features], df[["toxic"]], test_size=0.2, random_state=1
)
params = {"iterations": 2500, "verbose": False}
cat_model = CatBoostClassifier(**params)
cat_model.fit(X_train, y_train)
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator=cat_model, X=df[features], y=df[["toxic"]], cv=5)
print("Accuracy:{:.6f} %".format(accuracies.mean() * 100))
titles_options = [
("Confusion matrix, without normalization", None),
("Normalized confusion matrix", "true"),
]
for title, normalize in titles_options:
disp = ConfusionMatrixDisplay.from_estimator(
cat_model,
X_test,
y_test,
display_labels=["NoBug", "Bug"],
cmap=plt.cm.Blues,
normalize=normalize,
)
disp.ax_.set_title(title)
# print(title)
# print(disp.confusion_matrix)
plt.show()
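# classification_report and f1_score are imported above but never used; an assumed
# quick per-class summary on the held-out split could look like this:
y_pred = cat_model.predict(X_test)
print(classification_report(y_test["toxic"], y_pred))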
# model = cat_model
# shap_values = cat_model.get_feature_importance(Pool(X_test, label=y_test) ,
# type="ShapValues")
# y_pred = model.predict(X_test)
# expected_value = shap_values[0,-1]
# shap_values = shap_values[:,:-1]
# shap.dependence_plot(features[0], shap_values, X_test)
# shap.initjs()
# explainer = shap.TreeExplainer(model)
# shap_values = explainer.shap_values(X_test)
# # shap.summary_plot(shap_values, X_test,plot_type = 'bar')
# shap_valuesnew = [-shap_values,shap_values]
# shap.summary_plot(shap_valuesnew, X_test,class_names= ["NoBug","Bug"]) # combined
# shap.summary_plot(shap_valuesnew[0], X_test, plot_type = 'violin')
# shap.summary_plot(shap_valuesnew[1], X_test, plot_type = 'violin')
# for i in features:
# shap.dependence_plot(i,shap_values,X_test)
# shap.force_plot(explainer.expected_value, shap_values, X_test)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/041/129041903.ipynb
| null | null |
[{"Id": 129041903, "ScriptId": 37511225, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13980349, "CreationDate": "05/10/2023 14:10:34", "VersionNumber": 1.0, "Title": "Natural_prod", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 148.0, "LinesInsertedFromPrevious": 148.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, StandardScaler
from sklearn.metrics import (
confusion_matrix,
roc_curve,
roc_auc_score,
accuracy_score,
precision_score,
recall_score,
f1_score,
)
import matplotlib.pyplot as plt
import seaborn as sns
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
# Load data
data = pd.read_csv("/kaggle/input/datas-req/natural prod toxic.csv")
# Drop rows with missing data
data = data.dropna()
# Note: the original comment here said "drop rows with invalid SMILES", but this
# second dropna() only removes missing values again (it is redundant and does not validate SMILES)
data = data.dropna()
# Label encode SMILES
le = LabelEncoder()
data["SMILES"] = le.fit_transform(data["SMILES"])
# Standardize features
scaler = StandardScaler()
X = data[["SMILES", "HBA", "HBD", "MW", "ROT", "logP", "TPSA"]]
data
df = data
df
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn import metrics
import shap
import catboost
from catboost import CatBoostClassifier, Pool
import numpy as np
import matplotlib
from sklearn.model_selection import train_test_split
from catboost import CatBoostClassifier
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.metrics import classification_report
from sklearn import svm, datasets
from sklearn.metrics import ConfusionMatrixDisplay
# df = pd.read_csv("kc3.csv")
# df['c'] = df['c'].replace(False, 0)
# df['c'] = df['c'].replace(True, 1)
# df=df.drop(['coconut_id'], axis=1)
print(f"Size of Dataset {df.shape}")
features = [feat for feat in list(df) if feat != "toxic"]
# print(features)
X_train, X_test, y_train, y_test = train_test_split(
df[features], df[["toxic"]], test_size=0.2, random_state=1
)
params = {"iterations": 2500, "verbose": False}
cat_model = CatBoostClassifier(**params)
cat_model.fit(X_train, y_train)
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator=cat_model, X=df[features], y=df[["toxic"]], cv=5)
print("Accuracy:{:.6f} %".format(accuracies.mean() * 100))
titles_options = [
("Confusion matrix, without normalization", None),
("Normalized confusion matrix", "true"),
]
for title, normalize in titles_options:
disp = ConfusionMatrixDisplay.from_estimator(
cat_model,
X_test,
y_test,
display_labels=["NoBug", "Bug"],
cmap=plt.cm.Blues,
normalize=normalize,
)
disp.ax_.set_title(title)
# print(title)
# print(disp.confusion_matrix)
plt.show()
# model = cat_model
# shap_values = cat_model.get_feature_importance(Pool(X_test, label=y_test) ,
# type="ShapValues")
# y_pred = model.predict(X_test)
# expected_value = shap_values[0,-1]
# shap_values = shap_values[:,:-1]
# shap.dependence_plot(features[0], shap_values, X_test)
# shap.initjs()
# explainer = shap.TreeExplainer(model)
# shap_values = explainer.shap_values(X_test)
# # shap.summary_plot(shap_values, X_test,plot_type = 'bar')
# shap_valuesnew = [-shap_values,shap_values]
# shap.summary_plot(shap_valuesnew, X_test,class_names= ["NoBug","Bug"]) # combined
# shap.summary_plot(shap_valuesnew[0], X_test, plot_type = 'violin')
# shap.summary_plot(shap_valuesnew[1], X_test, plot_type = 'violin')
# for i in features:
# shap.dependence_plot(i,shap_values,X_test)
# shap.force_plot(explainer.expected_value, shap_values, X_test)
| false | 0 | 1,318 | 0 | 1,318 | 1,318 |
||
129951563
|
<jupyter_start><jupyter_text>Top 10000 popular Movies TMDB
This is a collection of metadata about the top 10,000 most popular movies on **The Movie Database (TMDB)** as of May 2023. The dataset includes information such as movie titles, release dates, runtime, genres, production companies, budget, and revenue. This data is collected from TMDB's public [API](https://developer.themoviedb.org/docs).
#### Little bit about [TMDB](https://www.themoviedb.org/)
TMDB (The Movie Database) is a popular online database and community platform that provides a vast collection of information about movies, TV shows, and other related content. TMDB allows users to browse and search for movies and TV shows, view information such as cast, crew, synopsis, and ratings, and also contribute to the community by adding their own reviews, ratings, and other content.
#### Purpose
The dataset is intended for use by data analysts, researchers, and developers who are interested in studying or analyzing the popularity and characteristics of movies. The dataset can be used to perform a wide range of analyses, such as exploring trends in movie genres over time, identifying patterns in movie budgets and revenues, and analyzing the impact of different attributes on a movie's popularity.
####Attributes
- **id**: Unique identifier assigned to each movie in the TMDB database.
- **title**: Title of the movie.
- **release_date**: Date on which the movie was released.
- **genres**: List of genres associated with the movie.
- **original_language**: Language in which the movie was originally produced.
- **vote_average**: Average rating given to the movie by TMDB users.
- **vote_count**: Number of votes cast for the movie on TMDB.
- **popularity**: Popularity score assigned to the movie by TMDB based on user engagement.
- **overview**: Brief description or synopsis of the movie.
- **budget**: Estimated budget for producing the movie in USD.
- **production_companies**: List of production companies involved in making the movie.
- **revenue**: Total revenue generated by the movie in USD.
- **runtime**: Total runtime of the movie in minutes.
- **tagline**: Short, memorable phrase associated with the movie, often used in promotional material.
#### [Dataset Creation](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook)
The dataset mentioned has been created by fetching raw data from TMDB's public API, and then cleaning and preprocessing the data to improve its quality and make it easier to work with. The cleaning process has been done using a notebook available [here](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook), which outlines the steps taken to transform the raw data into a more usable format.
Kaggle dataset identifier: top-10000-popular-movies-tmdb-05-2023
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv(
"/kaggle/input/top-10000-popular-movies-tmdb-05-2023/popular_10000_movies_tmdb.csv"
)
df
# # top 10 popular movies in the dataset
top_10 = df[["title", "popularity"]]
top_10 = top_10.sort_values(["popularity"], ascending=False)
top_10.head(10)
# # the average popularity for each language
most_popularity_lng = df.groupby("original_language")["popularity"].mean()
# most_popularity_lng = df[['original_language',"popularity"]]
most_popularity_lng
# # Top 5 selling movies
top_5_selling_movie = df[["title", "revenue"]]
top_5_selling_movie = top_5_selling_movie.sort_values(["revenue"], ascending=False)
top_5_selling_movie = top_5_selling_movie.head(5)
top_5_selling_movie
# # Verbalize your insights in Markdown cells.
# ## Top 10,000 Popular Movies
# This dataset provides information about the top 10,000 popular movies as of May 2023, sourced from TMDB (The Movie Database). Let's explore some insights from this dataset:
# Movie Genres The dataset includes a wide range of movie genres. By analyzing the genre distribution, we can observe the popularity of different genres among the top 10,000 movies. It would be interesting to see which genres are most prevalent and if there are any emerging trends in movie preferences.
# Ratings and Reviews The dataset likely contains ratings and reviews for the movies, which can be used to evaluate the overall reception of these films. We can analyze the average ratings and sentiments expressed in the reviews to identify the most well-received movies among the top 10,000.
# Box Office Performance Movies that make it to the top 10,000 popular list often have significant box office success. By exploring the dataset, we can gather information on the worldwide and domestic box office earnings for these movies. It would be fascinating to examine the correlation between a film's popularity and its financial performance.
# Movie Directors and Cast Identifying the directors and cast members associated with the top 10,000 movies can provide insights into popular trends in the film industry. We can determine if specific directors or actors/actresses are more frequently associated with successful movies and explore any patterns or preferences among the filmmakers and actors involved.
# Release Year Distribution Analyzing the distribution of movie release years in the dataset can help us understand if there are any temporal patterns or preferences among the top 10,000 popular movies. We can identify if recent releases dominate the list or if there are notable classics that continue to maintain their popularity over time.
# Movie Runtimes Examining the movie runtimes can give us an idea of the preferred duration among the top 10,000 movies. We can analyze the distribution of runtimes and identify any trends or patterns in movie length. This insight could help filmmakers and studios understand audience preferences when it comes to movie duration.
# Language Diversity By analyzing the languages of the top 10,000 movies, we can gain insights into the diversity and distribution of films from different regions. It would be interesting to identify which languages are most prevalent and if there are any emerging international cinema trends.
# Production Companies Exploring the production companies associated with the top 10,000 movies can reveal patterns in successful collaborations. We can identify if certain production companies are consistently associated with popular movies and analyze any relationships between production companies and film success.
# These insights provide a starting point for exploring the dataset of the top 10,000 popular movies from TMDB in May 2023. By diving deeper into these aspects, we can gain a better understanding of the movie industry's current trends, preferences, and patterns.
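# One assumed example of the genre analysis described above (this assumes the
# "genres" column stores a list-like string such as "['Action', 'Comedy']"):
from ast import literal_eval
genre_counts = df["genres"].dropna().apply(literal_eval).explode().value_counts()
print(genre_counts.head(10))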
import seaborn as sns
import matplotlib.pyplot as plt
# # visualization of the top 5 popular movies
top_10 = top_10.head(5)
plt.figure()
sns.barplot(x="popularity", y="title", data=top_10, palette="viridis")
plt.title("top 5 popular movies")
plt.xlabel("popularity")
plt.ylabel("title")
plt.show()
# # visualization of The Average of popularity for each language
plt.figure(figsize=(20, 30))
# note: the stray data=top_10 argument was removed; x/y already carry the values to plot
sns.barplot(x=most_popularity_lng.values, y=most_popularity_lng.index)
plt.title("The Average of popularity for each language")
plt.xlabel("Avg")
plt.ylabel("Language")
plt.show()
# # visualization of the top 5 selling movies
plt.figure(figsize=(20, 10))
sns.barplot(x="revenue", y="title", data=top_5_selling_movie)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/951/129951563.ipynb
|
top-10000-popular-movies-tmdb-05-2023
|
ursmaheshj
|
[{"Id": 129951563, "ScriptId": 38608636, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14982285, "CreationDate": "05/17/2023 16:43:00", "VersionNumber": 1.0, "Title": "top_10000_movies", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 90.0, "LinesInsertedFromPrevious": 90.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186383825, "KernelVersionId": 129951563, "SourceDatasetVersionId": 5643863}, {"Id": 186383824, "KernelVersionId": 129951563, "SourceDatasetVersionId": 1094}]
|
[{"Id": 5643863, "DatasetId": 3240464, "DatasourceVersionId": 5719190, "CreatorUserId": 7397148, "LicenseName": "CC0: Public Domain", "CreationDate": "05/09/2023 13:43:53", "VersionNumber": 4.0, "Title": "Top 10000 popular Movies TMDB", "Slug": "top-10000-popular-movies-tmdb-05-2023", "Subtitle": "A Comprehensive Collection of Metadata for the Top 10,000 Popular Movies on TMDB", "Description": "This is a collection of metadata about the top 10,000 most popular movies on **The Movie Database (TMDB)** as of May 2023. The dataset includes information such as movie titles, release dates, runtime, genres, production companies, budget, and revenue. This data is collected from TMDB's public [API](https://developer.themoviedb.org/docs). \n\n#### Little bit about [TMDB](https://www.themoviedb.org/)\nTMDB (The Movie Database) is a popular online database and community platform that provides a vast collection of information about movies, TV shows, and other related content. TMDB allows users to browse and search for movies and TV shows, view information such as cast, crew, synopsis, and ratings, and also contribute to the community by adding their own reviews, ratings, and other content.\n\n#### Purpose\nThe dataset is intended for use by data analysts, researchers, and developers who are interested in studying or analyzing the popularity and characteristics of movies. The dataset can be used to perform a wide range of analyses, such as exploring trends in movie genres over time, identifying patterns in movie budgets and revenues, and analyzing the impact of different attributes on a movie's popularity.\n\n####Attributes\n- **id**: Unique identifier assigned to each movie in the TMDB database.\n- **title**: Title of the movie.\n- **release_date**: Date on which the movie was released.\n- **genres**: List of genres associated with the movie.\n- **original_language**: Language in which the movie was originally produced.\n- **vote_average**: Average rating given to the movie by TMDB users.\n- **vote_count**: Number of votes cast for the movie on TMDB.\n- **popularity**: Popularity score assigned to the movie by TMDB based on user engagement.\n- **overview**: Brief description or synopsis of the movie.\n- **budget**: Estimated budget for producing the movie in USD.\n- **production_companies**: List of production companies involved in making the movie.\n- **revenue**: Total revenue generated by the movie in USD.\n- **runtime**: Total runtime of the movie in minutes.\n- **tagline**: Short, memorable phrase associated with the movie, often used in promotional material.\n\n#### [Dataset Creation](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook)\nThe dataset mentioned has been created by fetching raw data from TMDB's public API, and then cleaning and preprocessing the data to improve its quality and make it easier to work with. The cleaning process has been done using a notebook available [here](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook), which outlines the steps taken to transform the raw data into a more usable format.", "VersionNotes": "Data Update 2023-05-09", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3240464, "CreatorUserId": 7397148, "OwnerUserId": 7397148.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5921776.0, "CurrentDatasourceVersionId": 5999208.0, "ForumId": 3305699, "Type": 2, "CreationDate": "05/08/2023 19:50:26", "LastActivityDate": "05/08/2023", "TotalViews": 7400, "TotalDownloads": 1454, "TotalVotes": 37, "TotalKernels": 10}]
|
[{"Id": 7397148, "UserName": "ursmaheshj", "DisplayName": "Mahesh Jadhav", "RegisterDate": "05/11/2021", "PerformanceTier": 1}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv(
"/kaggle/input/top-10000-popular-movies-tmdb-05-2023/popular_10000_movies_tmdb.csv"
)
df
# # top 10 popular movies in the dataset
top_10 = df[["title", "popularity"]]
top_10 = top_10.sort_values(["popularity"], ascending=False)
top_10.head(10)
# # the average popularity for each language
most_popularity_lng = df.groupby("original_language")["popularity"].mean()
# most_popularity_lng = df[['original_language',"popularity"]]
most_popularity_lng
# # Top 5 selling movies
top_5_selling_movie = df[["title", "revenue"]]
top_5_selling_movie = top_5_selling_movie.sort_values(["revenue"], ascending=False)
top_5_selling_movie = top_5_selling_movie.head(5)
top_5_selling_movie
# # Verbalize your insights in Markdown cells.
# ## Top 10,000 Popular Movies
# This dataset provides information about the top 10,000 popular movies as of May 2023, sourced from TMDB (The Movie Database). Let's explore some insights from this dataset:
# Movie Genres: The dataset includes a wide range of movie genres. By analyzing the genre distribution, we can observe the popularity of different genres among the top 10,000 movies. It would be interesting to see which genres are most prevalent and if there are any emerging trends in movie preferences.
# Ratings and Reviews: The dataset likely contains ratings and reviews for the movies, which can be used to evaluate the overall reception of these films. We can analyze the average ratings and sentiments expressed in the reviews to identify the most well-received movies among the top 10,000.
# Box Office Performance: Movies that make it to the top 10,000 popular list often have significant box office success. By exploring the dataset, we can gather information on the worldwide and domestic box office earnings for these movies. It would be fascinating to examine the correlation between a film's popularity and its financial performance.
# Movie Directors and Cast: Identifying the directors and cast members associated with the top 10,000 movies can provide insights into popular trends in the film industry. We can determine if specific directors or actors/actresses are more frequently associated with successful movies and explore any patterns or preferences among the filmmakers and actors involved.
# Release Year Distribution: Analyzing the distribution of movie release years in the dataset can help us understand if there are any temporal patterns or preferences among the top 10,000 popular movies. We can identify if recent releases dominate the list or if there are notable classics that continue to maintain their popularity over time.
# Movie Runtimes: Examining the movie runtimes can give us an idea of the preferred duration among the top 10,000 movies. We can analyze the distribution of runtimes and identify any trends or patterns in movie length. This insight could help filmmakers and studios understand audience preferences when it comes to movie duration.
# Language Diversity: By analyzing the languages of the top 10,000 movies, we can gain insights into the diversity and distribution of films from different regions. It would be interesting to identify which languages are most prevalent and if there are any emerging international cinema trends.
# Production Companies: Exploring the production companies associated with the top 10,000 movies can reveal patterns in successful collaborations. We can identify if certain production companies are consistently associated with popular movies and analyze any relationships between production companies and film success.
# These insights provide a starting point for exploring the dataset of the top 10,000 popular movies from TMDB in May 2023. By diving deeper into these aspects, we can gain a better understanding of the movie industry's current trends, preferences, and patterns.
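# A hedged sketch of the genre-distribution idea mentioned above. It assumes the
# "genres" column stores stringified Python lists (e.g. "['Action', 'Drama']");
# if the file encodes genres differently, only the parsing step would need adjusting.
from ast import literal_eval


def parse_genres(value):
    # Fall back to an empty list for missing or malformed entries
    try:
        parsed = literal_eval(value)
        return parsed if isinstance(parsed, list) else []
    except (ValueError, SyntaxError, TypeError):
        return []


genre_counts = df["genres"].apply(parse_genres).explode().value_counts()
print(genre_counts.head(10))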
import seaborn as sns
import matplotlib.pyplot as plt
# # visualization of the top 5 popular movies
top_10 = top_10.head(5)
plt.figure()
sns.barplot(x="popularity", y="title", data=top_10, palette="viridis")
plt.title("top 5 popular movies")
plt.xlabel("popularity")
plt.ylabel("title")
plt.show()
# # visualization of The Average of popularity for each language
plt.figure(figsize=(20, 30))
sns.barplot(x=most_popularity_lng.values, y=most_popularity_lng.index)
plt.title("The Average of popularity for each language")
plt.xlabel("Avg")
plt.ylabel("Language")
plt.show()
# # visualization of the top 5 selling movies
plt.figure(figsize=(20, 10))
sns.barplot(x="revenue", y="title", data=top_5_selling_movie)
| false | 1 | 1,392 | 0 | 2,091 | 1,392 |
||
129951819
|
<jupyter_start><jupyter_text>Fraud Detection in Electricity and Gas Consumption
The Tunisian Company of Electricity and Gas (STEG) is a public and a non-administrative company, it is responsible for delivering electricity and gas across Tunisia. The company suffered tremendous losses in the order of 200 million Tunisian Dinars due to fraudulent manipulations of meters by consumers.
Kaggle dataset identifier: fraud-detection-in-electricity-and-gas-consumption
<jupyter_script># ## Load necessary packages
# Ignore Warnings
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
invoice_test = pd.read_csv(
"/kaggle/input/fraud-detection-in-electricity-and-gas-consumption/invoice_test.csv",
low_memory=False,
)
invoice_train = pd.read_csv(
"/kaggle/input/fraud-detection-in-electricity-and-gas-consumption/invoice_train.csv",
low_memory=False,
)
client_test = pd.read_csv(
"/kaggle/input/fraud-detection-in-electricity-and-gas-consumption/client_test.csv",
low_memory=False,
)
client_train = pd.read_csv(
"/kaggle/input/fraud-detection-in-electricity-and-gas-consumption/client_train.csv",
low_memory=False,
)
sample_submission = pd.read_csv(
"/kaggle/input/fraud-detection-in-electricity-and-gas-consumption/SampleSubmission (2).csv",
low_memory=False,
)
# compare size of the various datasets
print(client_train.shape, invoice_train.shape, client_test.shape, invoice_test.shape)
client_train.head()
invoice_train.head()
# ## Exploratory data analysis (EDA)
L = pd.to_datetime(client_train["creation_date"], dayfirst=True)
client_train["creation_year"] = L.dt.year
years = set(L.dt.year)
# ### Invoice counter type (ELEC, GAZ)
C = invoice_train["counter_type"].tolist()
elec = C.count("ELEC") * 100 / len(C)
gaz = C.count("GAZ") * 100 / len(C)
plt.figure(figsize=(6, 6))
plt.pie([elec, gaz], labels=["ELEC", "GAZ"], autopct="%1.1f%%")
plt.title("Proportion of Counter type (ELEC to GAZ)")
plt.show()
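# The same proportions can be read off directly with value_counts, which avoids
# materializing the whole column as a Python list (equivalent in intent to the
# ELEC/GAZ percentages computed above).
counter_share = invoice_train["counter_type"].value_counts(normalize=True) * 100
print(counter_share)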
year = client_train.groupby(["creation_year"])["client_id"].count()
plt.figure(figsize=(12, 6)) # increase figure size to make it larger
plt.plot(year)
plt.title("Number of Clients by Creation Year")
plt.xlabel("Creation Year")
plt.ylabel("Number of Clients")
# set x-axis tick labels to show every 5 years
plt.xticks(range(min(year.index), max(year.index) + 1, 5), rotation=45)
plt.show()
E1 = sorted(years)  # sort the years so the line plots run in chronological order
groups = client_train.groupby(["creation_year", "client_catg"])["client_id"].count()
L11 = []
L12 = []
L51 = []
for i in E1:
    L11.append(groups[i][11])
    L12.append(groups[i][12])
    L51.append(groups[i][51])
fig, ax = plt.subplots()
fig.set_size_inches(10, 5)
ax.plot(E1, L11, label="cat_11")
ax.plot(E1, L12, label="cat_12")
ax.plot(E1, L51, label="cat_51")
plt.title("Number of customers by year")
plt.legend()
plt.show()
# " Logarithmic plot "
logL11 = list(map(np.log, L11))
logL12 = list(map(np.log, L12))
logL51 = list(map(np.log, L51))
fig, ax = plt.subplots()
fig.set_size_inches(10, 5)
ax.plot(E1, logL11, label="cat_11")
ax.plot(E1, logL12, label="cat_12")
ax.plot(E1, logL51, label="cat_51")
plt.title("Logarithmic number of customers by year")
plt.legend()
plt.show()
ds = client_train.groupby(["target"])["client_id"].count()
plt.bar(x=ds.index, height=ds.values, tick_label=[0, 1])
plt.title("target distribution")
plt.show()
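# The bar chart above points to a strong class imbalance. Before any modelling, the
# invoice table usually has to be collapsed to one row per client so it can be joined
# onto client_train; a minimal sketch (assuming invoice_train carries a client_id key,
# and using only columns already referenced in this notebook):
invoice_agg = invoice_train.groupby("client_id").agg(
    n_invoices=("counter_type", "size"),
    n_counter_types=("counter_type", "nunique"),
)
train_merged = client_train.merge(invoice_agg, on="client_id", how="left")
print(train_merged.shape)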
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/951/129951819.ipynb
|
fraud-detection-in-electricity-and-gas-consumption
|
mrmorj
|
[{"Id": 129951819, "ScriptId": 38653004, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9028201, "CreationDate": "05/17/2023 16:45:11", "VersionNumber": 1.0, "Title": "Energy EDA and Prediction\u26a1", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 99.0, "LinesInsertedFromPrevious": 99.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186384170, "KernelVersionId": 129951819, "SourceDatasetVersionId": 1439341}]
|
[{"Id": 1439341, "DatasetId": 678596, "DatasourceVersionId": 1472785, "CreatorUserId": 3511431, "LicenseName": "Other (specified in description)", "CreationDate": "08/24/2020 12:29:16", "VersionNumber": 2.0, "Title": "Fraud Detection in Electricity and Gas Consumption", "Slug": "fraud-detection-in-electricity-and-gas-consumption", "Subtitle": "Client\u2019s billing history", "Description": "The Tunisian Company of Electricity and Gas (STEG) is a public and a non-administrative company, it is responsible for delivering electricity and gas across Tunisia. The company suffered tremendous losses in the order of 200 million Tunisian Dinars due to fraudulent manipulations of meters by consumers.", "VersionNotes": "Fraud Detection in Electricity and Gas Consumption", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 678596, "CreatorUserId": 3511431, "OwnerUserId": 3511431.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1439341.0, "CurrentDatasourceVersionId": 1472785.0, "ForumId": 693128, "Type": 2, "CreationDate": "05/27/2020 16:59:49", "LastActivityDate": "05/27/2020", "TotalViews": 24769, "TotalDownloads": 1878, "TotalVotes": 46, "TotalKernels": 7}]
|
[{"Id": 3511431, "UserName": "mrmorj", "DisplayName": "Andrii Samoshyn", "RegisterDate": "07/26/2019", "PerformanceTier": 2}]
|
# ## Load necessary packages
# Ignore Warnings
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
invoice_test = pd.read_csv(
"/kaggle/input/fraud-detection-in-electricity-and-gas-consumption/invoice_test.csv",
low_memory=False,
)
invoice_train = pd.read_csv(
"/kaggle/input/fraud-detection-in-electricity-and-gas-consumption/invoice_train.csv",
low_memory=False,
)
client_test = pd.read_csv(
"/kaggle/input/fraud-detection-in-electricity-and-gas-consumption/client_test.csv",
low_memory=False,
)
client_train = pd.read_csv(
"/kaggle/input/fraud-detection-in-electricity-and-gas-consumption/client_train.csv",
low_memory=False,
)
sample_submission = pd.read_csv(
"/kaggle/input/fraud-detection-in-electricity-and-gas-consumption/SampleSubmission (2).csv",
low_memory=False,
)
# compare size of the various datasets
print(client_train.shape, invoice_train.shape, client_test.shape, invoice_test.shape)
client_train.head()
invoice_train.head()
# ## Exploratory data analysis (EDA)
L = pd.to_datetime(client_train["creation_date"], dayfirst=True)
client_train["creation_year"] = L.dt.year
years = set(L.dt.year)
# ### Invoice counter type (ELEC, GAZ)
C = invoice_train["counter_type"].tolist()
elec = C.count("ELEC") * 100 / len(C)
gaz = C.count("GAZ") * 100 / len(C)
plt.figure(figsize=(6, 6))
plt.pie([elec, gaz], labels=["ELEC", "GAZ"], autopct="%1.1f%%")
plt.title("Proportion of Counter type (ELEC to GAZ)")
plt.show()
year = client_train.groupby(["creation_year"])["client_id"].count()
plt.figure(figsize=(12, 6)) # increase figure size to make it larger
plt.plot(year)
plt.title("Number of Clients by Creation Year")
plt.xlabel("Creation Year")
plt.ylabel("Number of Clients")
# set x-axis tick labels to show every 5 years
plt.xticks(range(min(year.index), max(year.index) + 1, 5), rotation=45)
plt.show()
E1 = sorted(years)  # sort the years so the line plots run in chronological order
groups = client_train.groupby(["creation_year", "client_catg"])["client_id"].count()
L11 = []
L12 = []
L51 = []
for i in E1:
    L11.append(groups[i][11])
    L12.append(groups[i][12])
    L51.append(groups[i][51])
fig, ax = plt.subplots()
fig.set_size_inches(10, 5)
ax.plot(E1, L11, label="cat_11")
ax.plot(E1, L12, label="cat_12")
ax.plot(E1, L51, label="cat_51")
plt.title("Number of customers by year")
plt.legend()
plt.show()
# " Logarithmic plot "
logL11 = list(map(np.log, L11))
logL12 = list(map(np.log, L12))
logL51 = list(map(np.log, L51))
fig, ax = plt.subplots()
fig.set_size_inches(10, 5)
ax.plot(E1, logL11, label="cat_11")
ax.plot(E1, logL12, label="cat_12")
ax.plot(E1, logL51, label="cat_51")
plt.title("Logarithmic number of customers by year")
plt.legend()
plt.show()
ds = client_train.groupby(["target"])["client_id"].count()
plt.bar(x=ds.index, height=ds.values, tick_label=[0, 1])
plt.title("target distribution")
plt.show()
| false | 5 | 1,145 | 0 | 1,269 | 1,145 |
||
129951496
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import matplotlib.dates as mdates
import seaborn as sns # visualization
df = pd.read_csv("../input/walmart-recruiting-store-sales-forecasting/train.csv.zip")
df
df["Month"] = pd.to_datetime(df["Date"]).dt.month
df["Month"].unique()
df
sales_by_month = df.groupby("Month")["Weekly_Sales"].sum()
# Plot the data using a line chart
import matplotlib.pyplot as plt
plt.plot(sales_by_month.index, sales_by_month.values)
plt.xticks(range(1, 13)) # Set the x-tick labels to show 1 to 12
plt.xlabel("Month")
plt.ylabel("Total Weekly Sales")
plt.title("Weekly Sales by Month")
plt.show()
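# The plot above pools every year into twelve calendar months. A quick sketch of total
# sales over actual calendar time (assumes the Date column parses with pd.to_datetime,
# as already done above for the Month feature):
dates = pd.to_datetime(df["Date"])
weekly_total = df.groupby(dates)["Weekly_Sales"].sum()
plt.figure(figsize=(12, 4))
plt.plot(weekly_total.index, weekly_total.values)
plt.xlabel("Date")
plt.ylabel("Total Weekly Sales")
plt.title("Total Weekly Sales over Time")
plt.show()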
store_sales = df.groupby("Store")[["Weekly_Sales"]].sum()  # sum only the sales column
# Create a bar chart using Matplotlib
fig, ax = plt.subplots(figsize=(20, 10))
ax.bar(store_sales.index, store_sales["Weekly_Sales"], color="orange")
# Add labels and a title to the chart
plt.xticks(range(1, 46))
ax.set_xlabel("Stores")
ax.set_ylabel("Weekly Sales")
ax.set_title("Sales of stores")
# Display the chart
plt.show()
sales_by_store = df.groupby("Store")["Weekly_Sales"].sum()
top_stores = sales_by_store.sort_values(ascending=False).head(10)
display(top_stores)
fig, ax = plt.subplots(figsize=(20, 10))
ax.bar(top_stores.index, top_stores.values, color="orange")
# Add labels and a title to the chart
plt.xticks(range(1, 46))
ax.set_xlabel("Stores")
ax.set_ylabel("Weekly Sales")
ax.set_title("Sales of stores")
# Display the chart
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/951/129951496.ipynb
| null | null |
[{"Id": 129951496, "ScriptId": 38644919, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8219936, "CreationDate": "05/17/2023 16:42:20", "VersionNumber": 1.0, "Title": "intelligent database project", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 82.0, "LinesInsertedFromPrevious": 82.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import matplotlib.dates as mdates
import seaborn as sns # visualization
df = pd.read_csv("../input/walmart-recruiting-store-sales-forecasting/train.csv.zip")
df
df["Month"] = pd.to_datetime(df["Date"]).dt.month
df["Month"].unique()
df
sales_by_month = df.groupby("Month")["Weekly_Sales"].sum()
# Plot the data using a line chart
import matplotlib.pyplot as plt
plt.plot(sales_by_month.index, sales_by_month.values)
plt.xticks(range(1, 13)) # Set the x-tick labels to show 1 to 12
plt.xlabel("Month")
plt.ylabel("Total Weekly Sales")
plt.title("Weekly Sales by Month")
plt.show()
store_sales = df.groupby("Store")[["Weekly_Sales"]].sum()  # sum only the sales column
# Create a bar chart using Matplotlib
fig, ax = plt.subplots(figsize=(20, 10))
ax.bar(store_sales.index, store_sales["Weekly_Sales"], color="orange")
# Add labels and a title to the chart
plt.xticks(range(1, 46))
ax.set_xlabel("Stores")
ax.set_ylabel("Weekly Sales")
ax.set_title("Sales of stores")
# Display the chart
plt.show()
sales_by_store = df.groupby("Store")["Weekly_Sales"].sum()
top_stores = sales_by_store.sort_values(ascending=False).head(10)
display(top_stores)
fig, ax = plt.subplots(figsize=(20, 10))
ax.bar(top_stores.index, top_stores.values, color="orange")
# Add labels and a title to the chart
plt.xticks(range(1, 46))
ax.set_xlabel("Stores")
ax.set_ylabel("Weekly Sales")
ax.set_title("Sales of stores")
# Display the chart
plt.show()
| false | 0 | 684 | 0 | 684 | 684 |
||
129926625
|
<jupyter_start><jupyter_text>World Population Insights: 1970-2022
This dataset provides comprehensive information on global population dynamics. It includes attributes such as rank, country details, capital, continent, and population data from various years. Additional details like area, density, growth rate, and world population percentage are also included. This dataset allows for insightful analysis of worldwide demographic trends and patterns.
Kaggle dataset identifier: world-population-insights-1970-2022
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
df = pd.read_csv("/kaggle/input/world-population-insights-1970-2022/population.csv")
df.shape
df.info()
df.describe()
df.isna().sum()
df.head()
years = []
names = []
sums = []
for i in df.columns:
if i[0] == "1" or i[0] == "2":
names += [i]
years += [int(i.split()[0])]
sums += [np.sum(df[i])]
names
# # Growth throughout years
fig = px.line(x=names[::-1], y=sums[::-1])
fig.update_layout(xaxis_title="Years", yaxis_title="Population in Billions")
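# A small follow-up to the growth curve: the overall growth factor and the implied
# average annual growth rate between the earliest and latest years in the data
# (assumes, as the [::-1] reversal above does, that the year columns run newest to oldest).
first_total, last_total = sums[-1], sums[0]
first_year, last_year = years[-1], years[0]
growth_factor = last_total / first_total
cagr = growth_factor ** (1 / (last_year - first_year)) - 1
print(f"{first_year}->{last_year}: x{growth_factor:.2f} overall, {cagr:.2%} per year")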
def plots(df, x, y, mean=False):
group_data = df.groupby(y)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(25, 10))
bars = group_data[x].sum() if not mean else group_data[x].mean()
sns.barplot(x=bars.index, y=bars, ax=axes[0])
for container in axes[0].containers:
axes[0].bar_label(container, size=15, color="black")
sns.histplot(df, x=x, hue=y, kde=True, ax=axes[1])
if not mean:
plt.suptitle("{} by {}".format(x, y), size=20)
else:
plt.suptitle("Mean {} by {}".format(x, y), size=20)
plt.show()
plt.pie(
df["Continent"].value_counts(),
labels=df["Continent"].value_counts().index,
autopct="%0.2f%%",
)
df["Continent"].value_counts()
# # Grouped data's barplots and Histplots
for i in df.columns[6:14]:
plots(df, i, "Continent")
for i in df.columns[14:]:
plots(df, i, "Continent", True)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/926/129926625.ipynb
|
world-population-insights-1970-2022
|
gyaswanth297
|
[{"Id": 129926625, "ScriptId": 38619311, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11036701, "CreationDate": "05/17/2023 13:27:46", "VersionNumber": 1.0, "Title": "notebookec785e1b6a", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 71.0, "LinesInsertedFromPrevious": 71.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 4}]
|
[{"Id": 186349932, "KernelVersionId": 129926625, "SourceDatasetVersionId": 5700066}]
|
[{"Id": 5700066, "DatasetId": 3277670, "DatasourceVersionId": 5775728, "CreatorUserId": 11623113, "LicenseName": "Unknown", "CreationDate": "05/16/2023 16:25:59", "VersionNumber": 1.0, "Title": "World Population Insights: 1970-2022", "Slug": "world-population-insights-1970-2022", "Subtitle": "Global Population Trends: Exploring the Changing Dynamics of People Worldwide", "Description": "This dataset provides comprehensive information on global population dynamics. It includes attributes such as rank, country details, capital, continent, and population data from various years. Additional details like area, density, growth rate, and world population percentage are also included. This dataset allows for insightful analysis of worldwide demographic trends and patterns.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3277670, "CreatorUserId": 11623113, "OwnerUserId": 11623113.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5700066.0, "CurrentDatasourceVersionId": 5775728.0, "ForumId": 3343365, "Type": 2, "CreationDate": "05/16/2023 16:25:59", "LastActivityDate": "05/16/2023", "TotalViews": 5641, "TotalDownloads": 1200, "TotalVotes": 35, "TotalKernels": 4}]
|
[{"Id": 11623113, "UserName": "gyaswanth297", "DisplayName": "gYaswanth297", "RegisterDate": "09/17/2022", "PerformanceTier": 2}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
df = pd.read_csv("/kaggle/input/world-population-insights-1970-2022/population.csv")
df.shape
df.info()
df.describe()
df.isna().sum()
df.head()
years = []
names = []
sums = []
for i in df.columns:
if i[0] == "1" or i[0] == "2":
names += [i]
years += [int(i.split()[0])]
sums += [np.sum(df[i])]
names
# # Growth throughout years
fig = px.line(x=names[::-1], y=sums[::-1])
fig.update_layout(xaxis_title="Years", yaxis_title="Population in Billions")
def plots(df, x, y, mean=False):
group_data = df.groupby(y)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(25, 10))
bars = group_data[x].sum() if not mean else group_data[x].mean()
sns.barplot(x=bars.index, y=bars, ax=axes[0])
for container in axes[0].containers:
axes[0].bar_label(container, size=15, color="black")
sns.histplot(df, x=x, hue=y, kde=True, ax=axes[1])
if not mean:
plt.suptitle("{} by {}".format(x, y), size=20)
else:
plt.suptitle("Mean {} by {}".format(x, y), size=20)
plt.show()
plt.pie(
df["Continent"].value_counts(),
labels=df["Continent"].value_counts().index,
autopct="%0.2f%%",
)
df["Continent"].value_counts()
# # Grouped data's barplots and Histplots
for i in df.columns[6:14]:
plots(df, i, "Continent")
for i in df.columns[14:]:
plots(df, i, "Continent", True)
| false | 1 | 566 | 4 | 678 | 566 |
||
129926915
|
# # Final Project
# - EBAC Course
# ## 1\. Introduction
# The goal of this project is to explore, manipulate, clean and visualize the data provided in the EBAC course, in order to test what we have learned and our fluency with the Google Colab environment, as well as the Python skills built so far.
# ### 1.1. Description
# The data to be analyzed is available at this [link](https://raw.githubusercontent.com/andre-marcos-perez/ebac-course-utils/develop/dataset/credito.csv). It is a CSV file containing information about the clients of a financial institution, such as salary, sex, age and card type, among other attributes. The objective of this project is to analyze this data and understand, based on this study, why a client fails to pay their debts. This outcome is encoded as 0 or 1 in the default column, where 0 = non-defaulter and 1 = defaulter.
# Analyzing the data in its raw form would be impractical given the number of columns and rows, so we will simplify it as much as possible for a more effective analysis.
# ```
# -> Libraries used:
# pandas==1.5.3
# seaborn==0.12.2
# matplotlib==3.7.1
# ```
# ## 2\. Data Exploration
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv(
"/kaggle/input/adimplente-inadimplente/Python_M10_support material (4).csv",
na_values="na",
)
df.head(n=5)
# To get a sense of how much data we are dealing with:
#
df.shape
# So we are working with 10127 rows and 16 columns.
# To see, in raw counts, how many of these clients are non-defaulters and how many are defaulters:
df[df["default"] == 0].shape
df[df["default"] == 1].shape
# There are 8500 non-defaulting clients and 1627 defaulting clients.
qtd_total, _ = df.shape
qtd_adimplentes, _ = df[df["default"] == 0].shape
qtd_inadimplentes, _ = df[df["default"] == 1].shape
print(
f"A proporcão clientes adimplentes é de {round(100 * qtd_adimplentes / qtd_total, 2)}%"
)
print(
f"A proporcão clientes inadimplentes é de {round(100 * qtd_inadimplentes / qtd_total, 2)}%"
)
df.dtypes
# To check whether any data is missing, we use the following code:
df.select_dtypes("object").describe().transpose()
# The table above reveals two problems:
# 1- The counts for escolaridade (education), estado_civil (marital status) and salario_anual (annual salary) do not match the total number of rows, so this dataframe has missing values.
# 2- limite_credito (credit limit) and valor_transacoes_12m (transaction amount) are being treated as object columns instead of numbers.
df.drop("id", axis=1).select_dtypes("number").describe().transpose()
# For the numeric columns there is no missing information, since the "count" values match the total number of rows.
# Before we start filtering the data, we first need to handle the missing values.
df.isna().any()
# This confirms that escolaridade, estado_civil and
# salario_anual are the columns with missing information.
def stats_dados_faltantes(df: pd.DataFrame) -> None:
stats_dados_faltantes = []
for col in df.columns:
if df[col].isna().any():
qtd, _ = df[df[col].isna()].shape
total, _ = df.shape
dict_dados_faltantes = {
col: {"quantidade": qtd, "porcentagem": round(100 * qtd / total, 2)}
}
stats_dados_faltantes.append(dict_dados_faltantes)
for stat in stats_dados_faltantes:
print(stat)
stats_dados_faltantes(df=df)
stats_dados_faltantes(df=df[df["default"] == 0])
stats_dados_faltantes(df=df[df["default"] == 1])
# These last two checks show that, although the dataset has more non-defaulters than defaulters, the missing values are roughly proportional across both categories, which lets us drop those rows with little concern.
# ### 2.1. Value correction
# Let's fix the problem of limite_credito and valor_transacoes_12m being stored as objects instead of numbers.
# This happens because Python does not recognize the comma as the decimal separator, nor the period as the thousands separator, in these columns.
fn = lambda valor: float(valor.replace(".", "").replace(",", "."))
df["valor_transacoes_12m"] = df["valor_transacoes_12m"].apply(fn)
df["limite_credito"] = df["limite_credito"].apply(fn)
# To make sure the function reached every value and converted it correctly, let's look at the data types again.
df.dtypes
# ### 2.2. Data Cleaning
df = df.dropna()
# The line above removes every row that contains missing values.
df.shape
qtd_total_novo, _ = df.shape
qtd_adimplentes_novo, _ = df[df["default"] == 0].shape
qtd_inadimplentes_novo, _ = df[df["default"] == 1].shape
print(
f"A proporcão adimplentes ativos é de {round(100 * qtd_adimplentes / qtd_total, 2)}%"
)
print(
f"A nova proporcão de clientes adimplentes é de {round(100 * qtd_adimplentes_novo / qtd_total_novo, 2)}%"
)
print("")
print(
f"A proporcão clientes inadimplentes é de {round(100 * qtd_inadimplentes / qtd_total, 2)}%"
)
print(
f"A nova proporcão de clientes inadimplentes é de {round(100 * qtd_inadimplentes_novo / qtd_total_novo, 2)}%"
)
# This shows that, despite the dropped rows, the proportion of defaulters was preserved.
# ## 3\. Data Visualization
# Now, for easier interpretation, let's represent all of this data as charts.
# First, let's look at the distribution of education level overall and compared between non-defaulting and defaulting clients.
sns.set_style("whitegrid")
df_adimplente = df[df["default"] == 0]
df_inadimplente = df[df["default"] == 1]
coluna = "escolaridade"
titulos = [
"Escolaridade dos Clientes",
"Escolaridade dos Clientes Adimplentes",
"Escolaridade dos Clientes Inadimplentes",
]
eixo = 0
max_y = 0
max = df.select_dtypes("object").describe()[coluna]["freq"] * 1.1
figura, eixos = plt.subplots(1, 3, figsize=(20, 5), sharex=True)
for dataframe in [df, df_adimplente, df_inadimplente]:
df_to_plot = dataframe[coluna].value_counts().to_frame()
df_to_plot.rename(columns={coluna: "frequencia_absoluta"}, inplace=True)
df_to_plot[coluna] = df_to_plot.index
df_to_plot.sort_values(by=[coluna], inplace=True)
df_to_plot.sort_values(by=[coluna])
f = sns.barplot(
x=df_to_plot[coluna], y=df_to_plot["frequencia_absoluta"], ax=eixos[eixo]
)
f.set(title=titulos[eixo], xlabel=coluna.capitalize(), ylabel="Frequência Absoluta")
f.set_xticklabels(labels=f.get_xticklabels(), rotation=90)
_, max_y_f = f.get_ylim()
max_y = max_y_f if max_y_f > max_y else max_y
f.set(ylim=(0, max_y))
eixo += 1
figura.show()
# Next, a chart of the clients' annual salaries.
coluna = "salario_anual"
titulos = [
"Salário Anual dos Clientes",
"Salário Anual dos Clientes Adimplentes",
"Salário Anual dos Clientes Inadimplentes",
]
eixo = 0
max_y = 0
figura, eixos = plt.subplots(1, 3, figsize=(20, 5), sharex=True)
for dataframe in [df, df_adimplente, df_inadimplente]:
df_to_plot = dataframe[coluna].value_counts().to_frame()
df_to_plot.rename(columns={coluna: "frequencia_absoluta"}, inplace=True)
df_to_plot[coluna] = df_to_plot.index
df_to_plot.reset_index(inplace=True, drop=True)
df_to_plot.sort_values(by=[coluna], inplace=True)
f = sns.barplot(
x=df_to_plot[coluna], y=df_to_plot["frequencia_absoluta"], ax=eixos[eixo]
)
f.set(title=titulos[eixo], xlabel=coluna.capitalize(), ylabel="Frequência Absoluta")
f.set_xticklabels(labels=f.get_xticklabels(), rotation=90)
_, max_y_f = f.get_ylim()
max_y = max_y_f if max_y_f > max_y else max_y
f.set(ylim=(0, max_y))
eixo += 1
figura.show()
# Now let's visualize the number of transactions, split by whether the client is a defaulter or not.
coluna = "qtd_transacoes_12m"
titulos = [
"Qtd. de Transações no Último Ano",
"Qtd. de Transações no Último Ano de Adimplentes",
"Qtd. de Transações no Último Ano de Inadimplentes",
]
eixo = 0
max_y = 0
figura, eixos = plt.subplots(1, 3, figsize=(20, 5), sharex=True)
for dataframe in [df, df_adimplente, df_inadimplente]:
f = sns.histplot(x=coluna, data=dataframe, stat="count", ax=eixos[eixo])
f.set(title=titulos[eixo], xlabel=coluna.capitalize(), ylabel="Frequência Absoluta")
_, max_y_f = f.get_ylim()
max_y = max_y_f if max_y_f > max_y else max_y
f.set(ylim=(0, max_y))
eixo += 1
figura.show()
# Now let's visualize the transaction amounts for non-defaulting and defaulting clients.
coluna = "valor_transacoes_12m"
titulos = [
"Valor das Transações no Último Ano",
"Valor das Transações no Último Ano de Adimplentes",
"Valor das Transações no Último Ano de Inadimplentes",
]
eixo = 0
max_y = 0
figura, eixos = plt.subplots(1, 3, figsize=(20, 5), sharex=True)
for dataframe in [df, df_adimplente, df_inadimplente]:
f = sns.histplot(x=coluna, data=dataframe, stat="count", ax=eixos[eixo])
f.set(title=titulos[eixo], xlabel=coluna.capitalize(), ylabel="Frequência Absoluta")
_, max_y_f = f.get_ylim()
max_y = max_y_f if max_y_f > max_y else max_y
f.set(ylim=(0, max_y))
eixo += 1
figura.show()
# Finally, a chart relating the number of transactions to the transaction amounts.
f = sns.relplot(
x="valor_transacoes_12m", y="qtd_transacoes_12m", data=df, hue="default"
)
_ = f.set(
title="Relação entre Valor e Quantidade de Transações no Último Ano",
xlabel="Valor das Transações no Último Ano",
ylabel="Quantidade das Transações no Último Ano",
)
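# A hedged follow-up to the plots above: the linear correlation of each numeric column
# with the default flag gives a quick ranking of candidate predictors (correlation only
# captures linear, univariate relationships, so it is just a first look).
correlacoes = (
    df.select_dtypes("number").corr()["default"].drop("default").sort_values()
)
print(correlacoes)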
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/926/129926915.ipynb
| null | null |
[{"Id": 129926915, "ScriptId": 38647851, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14840733, "CreationDate": "05/17/2023 13:29:45", "VersionNumber": 1.0, "Title": "Projeto Final | Curso de Python", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 259.0, "LinesInsertedFromPrevious": 259.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# # Final Project
# - EBAC Course
# ## 1\. Introduction
# The goal of this project is to explore, manipulate, clean and visualize the data provided in the EBAC course, in order to test what we have learned and our fluency with the Google Colab environment, as well as the Python skills built so far.
# ### 1.1. Description
# The data to be analyzed is available at this [link](https://raw.githubusercontent.com/andre-marcos-perez/ebac-course-utils/develop/dataset/credito.csv). It is a CSV file containing information about the clients of a financial institution, such as salary, sex, age and card type, among other attributes. The objective of this project is to analyze this data and understand, based on this study, why a client fails to pay their debts. This outcome is encoded as 0 or 1 in the default column, where 0 = non-defaulter and 1 = defaulter.
# Analyzing the data in its raw form would be impractical given the number of columns and rows, so we will simplify it as much as possible for a more effective analysis.
# ```
# -> Libraries used:
# pandas==1.5.3
# seaborn==0.12.2
# matplotlib==3.7.1
# ```
# ## 2\. Data Exploration
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv(
"/kaggle/input/adimplente-inadimplente/Python_M10_support material (4).csv",
na_values="na",
)
df.head(n=5)
# To get a sense of how much data we are dealing with:
#
df.shape
# So we are working with 10127 rows and 16 columns.
# To see, in raw counts, how many of these clients are non-defaulters and how many are defaulters:
df[df["default"] == 0].shape
df[df["default"] == 1].shape
# There are 8500 non-defaulting clients and 1627 defaulting clients.
qtd_total, _ = df.shape
qtd_adimplentes, _ = df[df["default"] == 0].shape
qtd_inadimplentes, _ = df[df["default"] == 1].shape
print(
f"A proporcão clientes adimplentes é de {round(100 * qtd_adimplentes / qtd_total, 2)}%"
)
print(
f"A proporcão clientes inadimplentes é de {round(100 * qtd_inadimplentes / qtd_total, 2)}%"
)
df.dtypes
# To check whether any data is missing, we use the following code:
df.select_dtypes("object").describe().transpose()
# The table above reveals two problems:
# 1- The counts for escolaridade (education), estado_civil (marital status) and salario_anual (annual salary) do not match the total number of rows, so this dataframe has missing values.
# 2- limite_credito (credit limit) and valor_transacoes_12m (transaction amount) are being treated as object columns instead of numbers.
df.drop("id", axis=1).select_dtypes("number").describe().transpose()
# For the numeric columns there is no missing information, since the "count" values match the total number of rows.
# Before we start filtering the data, we first need to handle the missing values.
df.isna().any()
# This confirms that escolaridade, estado_civil and
# salario_anual are the columns with missing information.
def stats_dados_faltantes(df: pd.DataFrame) -> None:
stats_dados_faltantes = []
for col in df.columns:
if df[col].isna().any():
qtd, _ = df[df[col].isna()].shape
total, _ = df.shape
dict_dados_faltantes = {
col: {"quantidade": qtd, "porcentagem": round(100 * qtd / total, 2)}
}
stats_dados_faltantes.append(dict_dados_faltantes)
for stat in stats_dados_faltantes:
print(stat)
stats_dados_faltantes(df=df)
stats_dados_faltantes(df=df[df["default"] == 0])
stats_dados_faltantes(df=df[df["default"] == 1])
# These last two checks show that, although the dataset has more non-defaulters than defaulters, the missing values are roughly proportional across both categories, which lets us drop those rows with little concern.
# ### 2.1. Value correction
# Let's fix the problem of limite_credito and valor_transacoes_12m being stored as objects instead of numbers.
# This happens because Python does not recognize the comma as the decimal separator, nor the period as the thousands separator, in these columns.
fn = lambda valor: float(valor.replace(".", "").replace(",", "."))
df["valor_transacoes_12m"] = df["valor_transacoes_12m"].apply(fn)
df["limite_credito"] = df["limite_credito"].apply(fn)
# To make sure the function reached every value and converted it correctly, let's look at the data types again.
df.dtypes
# ### 2.2. Data Cleaning
df = df.dropna()
# The line above removes every row that contains missing values.
df.shape
qtd_total_novo, _ = df.shape
qtd_adimplentes_novo, _ = df[df["default"] == 0].shape
qtd_inadimplentes_novo, _ = df[df["default"] == 1].shape
print(
f"A proporcão adimplentes ativos é de {round(100 * qtd_adimplentes / qtd_total, 2)}%"
)
print(
f"A nova proporcão de clientes adimplentes é de {round(100 * qtd_adimplentes_novo / qtd_total_novo, 2)}%"
)
print("")
print(
f"A proporcão clientes inadimplentes é de {round(100 * qtd_inadimplentes / qtd_total, 2)}%"
)
print(
f"A nova proporcão de clientes inadimplentes é de {round(100 * qtd_inadimplentes_novo / qtd_total_novo, 2)}%"
)
# This shows that, despite the dropped rows, the proportion of defaulters was preserved.
# ## 3\. Data Visualization
# Now, for easier interpretation, let's represent all of this data as charts.
# First, let's look at the distribution of education level overall and compared between non-defaulting and defaulting clients.
sns.set_style("whitegrid")
df_adimplente = df[df["default"] == 0]
df_inadimplente = df[df["default"] == 1]
coluna = "escolaridade"
titulos = [
"Escolaridade dos Clientes",
"Escolaridade dos Clientes Adimplentes",
"Escolaridade dos Clientes Inadimplentes",
]
eixo = 0
max_y = 0
max = df.select_dtypes("object").describe()[coluna]["freq"] * 1.1
figura, eixos = plt.subplots(1, 3, figsize=(20, 5), sharex=True)
for dataframe in [df, df_adimplente, df_inadimplente]:
df_to_plot = dataframe[coluna].value_counts().to_frame()
df_to_plot.rename(columns={coluna: "frequencia_absoluta"}, inplace=True)
df_to_plot[coluna] = df_to_plot.index
df_to_plot.sort_values(by=[coluna], inplace=True)
df_to_plot.sort_values(by=[coluna])
f = sns.barplot(
x=df_to_plot[coluna], y=df_to_plot["frequencia_absoluta"], ax=eixos[eixo]
)
f.set(title=titulos[eixo], xlabel=coluna.capitalize(), ylabel="Frequência Absoluta")
f.set_xticklabels(labels=f.get_xticklabels(), rotation=90)
_, max_y_f = f.get_ylim()
max_y = max_y_f if max_y_f > max_y else max_y
f.set(ylim=(0, max_y))
eixo += 1
figura.show()
# Next, a chart of the clients' annual salaries.
coluna = "salario_anual"
titulos = [
"Salário Anual dos Clientes",
"Salário Anual dos Clientes Adimplentes",
"Salário Anual dos Clientes Inadimplentes",
]
eixo = 0
max_y = 0
figura, eixos = plt.subplots(1, 3, figsize=(20, 5), sharex=True)
for dataframe in [df, df_adimplente, df_inadimplente]:
df_to_plot = dataframe[coluna].value_counts().to_frame()
df_to_plot.rename(columns={coluna: "frequencia_absoluta"}, inplace=True)
df_to_plot[coluna] = df_to_plot.index
df_to_plot.reset_index(inplace=True, drop=True)
df_to_plot.sort_values(by=[coluna], inplace=True)
f = sns.barplot(
x=df_to_plot[coluna], y=df_to_plot["frequencia_absoluta"], ax=eixos[eixo]
)
f.set(title=titulos[eixo], xlabel=coluna.capitalize(), ylabel="Frequência Absoluta")
f.set_xticklabels(labels=f.get_xticklabels(), rotation=90)
_, max_y_f = f.get_ylim()
max_y = max_y_f if max_y_f > max_y else max_y
f.set(ylim=(0, max_y))
eixo += 1
figura.show()
# Now let's visualize the number of transactions, split by whether the client is a defaulter or not.
coluna = "qtd_transacoes_12m"
titulos = [
"Qtd. de Transações no Último Ano",
"Qtd. de Transações no Último Ano de Adimplentes",
"Qtd. de Transações no Último Ano de Inadimplentes",
]
eixo = 0
max_y = 0
figura, eixos = plt.subplots(1, 3, figsize=(20, 5), sharex=True)
for dataframe in [df, df_adimplente, df_inadimplente]:
f = sns.histplot(x=coluna, data=dataframe, stat="count", ax=eixos[eixo])
f.set(title=titulos[eixo], xlabel=coluna.capitalize(), ylabel="Frequência Absoluta")
_, max_y_f = f.get_ylim()
max_y = max_y_f if max_y_f > max_y else max_y
f.set(ylim=(0, max_y))
eixo += 1
figura.show()
# Now let's visualize the transaction amounts for non-defaulting and defaulting clients.
coluna = "valor_transacoes_12m"
titulos = [
"Valor das Transações no Último Ano",
"Valor das Transações no Último Ano de Adimplentes",
"Valor das Transações no Último Ano de Inadimplentes",
]
eixo = 0
max_y = 0
figura, eixos = plt.subplots(1, 3, figsize=(20, 5), sharex=True)
for dataframe in [df, df_adimplente, df_inadimplente]:
f = sns.histplot(x=coluna, data=dataframe, stat="count", ax=eixos[eixo])
f.set(title=titulos[eixo], xlabel=coluna.capitalize(), ylabel="Frequência Absoluta")
_, max_y_f = f.get_ylim()
max_y = max_y_f if max_y_f > max_y else max_y
f.set(ylim=(0, max_y))
eixo += 1
figura.show()
# Finally, a chart relating the number of transactions to the transaction amounts.
f = sns.relplot(
x="valor_transacoes_12m", y="qtd_transacoes_12m", data=df, hue="default"
)
_ = f.set(
title="Relação entre Valor e Quantidade de Transações no Último Ano",
xlabel="Valor das Transações no Último Ano",
ylabel="Quantidade das Transações no Último Ano",
)
| false | 0 | 3,585 | 0 | 3,585 | 3,585 |
||
129926128
|
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
from sklearn import preprocessing
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn import svm
from sklearn.svm import SVC
from sklearn.metrics import jaccard_score
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, accuracy_score
import sklearn.metrics as metrics
df = pd.read_csv(
"https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillUp/labs/ML-FinalAssignment/Weather_Data.csv"
)
df.head()
df.info()
df_sydney_processed = pd.get_dummies(
data=df, columns=["RainToday", "WindGustDir", "WindDir9am", "WindDir3pm"]
)
df_sydney_processed.replace(["No", "Yes"], [0, 1], inplace=True)
df_sydney_processed.drop("Date", axis=1, inplace=True)
df_sydney_processed = df_sydney_processed.astype(float)
# #1.Splitting the dataset into training and testing data for regression
features = df_sydney_processed.drop(columns="RainTomorrow", axis=1)
Y = df_sydney_processed["RainTomorrow"]
x_train, x_test, y_train, y_test = train_test_split(
features, Y, test_size=0.2, random_state=10
)
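# The raw features span very different scales, which mainly affects the distance- and
# margin-based models below (KNN, SVM). A minimal sketch of standardization, fitted on
# the training split only; the scaled copies are kept separate so the cells below still
# run on the original features.
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_test_scaled = scaler.transform(x_test)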
# #1.Linear Regression
# 2.Building and training a model using Linear Regression and calculating evaluation metrics
LinearReg = LinearRegression()
LinearReg.fit(x_train, y_train)
LinearReg.score(x_test, y_test)
predictions = LinearReg.predict(x_test)
LinearRegression_MAE = metrics.mean_absolute_error(y_test, predictions)
LinearRegression_MSE = metrics.mean_squared_error(y_test, predictions)
LinearRegression_R2 = metrics.r2_score(y_test, predictions)
print(
f"LinearRegression_MAE is :{LinearRegression_MAE},LinearRegression_MSE is :{LinearRegression_MSE},LinearRegression_R2 is :{LinearRegression_R2}"
)
print(
"LinearRegression_MAE is :{},LinearRegression_MSE is :{},LinearRegression_R2 is :{}".format(
LinearRegression_MAE, LinearRegression_MSE, LinearRegression_R2
)
)
# #2.KNN
k = 4
KNN = KNeighborsClassifier(n_neighbors=k).fit(x_train, y_train)
KNN
predictions = KNN.predict(x_test)
KNN_Accuracy_Score = metrics.accuracy_score(y_test, predictions)
KNN_JaccardIndex = metrics.jaccard_score(y_test, predictions)
KNN_F1_Score = metrics.f1_score(y_test, predictions)
print(
"KNN_Accuracy_Score is :{},KNN_JaccardIndex is :{},KNN_F1_Score is :{}".format(
KNN_Accuracy_Score, KNN_JaccardIndex, KNN_F1_Score
)
)
# #3.Decison Tree
Tree = DecisionTreeClassifier(criterion="entropy", max_depth=4)
Tree
Tree.fit(x_train, y_train)
predictions = Tree.predict(x_test)
Tree_Accuracy_Score = metrics.accuracy_score(y_test, predictions)
Tree_JaccardIndex = metrics.jaccard_score(y_test, predictions)
Tree_F1_Score = metrics.f1_score(y_test, predictions)
print(
"Tree_Accuracy_Score is :{},Tree_JaccardIndex is :{},Tree_F1_Score is :{}".format(
Tree_Accuracy_Score, Tree_JaccardIndex, Tree_F1_Score
)
)
# #4.Logistic Regression
x_train, x_test, y_train, y_test = train_test_split(
features, Y, test_size=0.2, random_state=1
)
LR = LogisticRegression(solver="liblinear")
LR.fit(x_train, y_train)
predictions = LR.predict(x_test)
LR_Accuracy_Score = metrics.accuracy_score(y_test, predictions)
LR_JaccardIndex = metrics.jaccard_score(y_test, predictions)
LR_F1_Score = metrics.f1_score(y_test, predictions)
LR_Log_Loss = metrics.log_loss(y_test, predictions)
print(
"LR_Accuracy_Score is :{},LR_JaccardIndex is :{},LR_F1_Score is :{},LR_Log_Loss is :{}".format(
LR_Accuracy_Score, LR_JaccardIndex, LR_F1_Score, LR_Log_Loss
)
)
# #5.SVM
SVM = SVC(kernel="linear", random_state=0)
SVM.fit(x_train, y_train)
predictions = SVM.predict(x_test)
SVM_Accuracy_Score = metrics.accuracy_score(y_test, predictions)
SVM_JaccardIndex = metrics.jaccard_score(y_test, predictions)
SVM_F1_Score = metrics.f1_score(y_test, predictions)
print(
"SVM_Accuracy_Score is :{},SVM_JaccardIndex is :{},SVM_F1_Score is :{}".format(
SVM_Accuracy_Score, SVM_JaccardIndex, SVM_F1_Score
)
)
cm = confusion_matrix(y_test, predictions)
print(cm)
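# The raw confusion matrix above is easier to read as a labeled heatmap
# (a small visual sketch; class 0 = no rain tomorrow, 1 = rain, as encoded earlier).
import seaborn as sns

sns.heatmap(cm, annot=True, fmt="d", cmap="Blues")
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title("SVM confusion matrix")
plt.show()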
from tabulate import tabulate
d = {
"KNN": [KNN_Accuracy_Score, KNN_JaccardIndex, KNN_F1_Score, "-"],
"Tree": [Tree_Accuracy_Score, Tree_JaccardIndex, Tree_F1_Score, "-"],
"LR": [LR_Accuracy_Score, LR_JaccardIndex, LR_F1_Score, LR_Log_Loss],
"SVM": [SVM_Accuracy_Score, SVM_JaccardIndex, SVM_F1_Score, "-"],
}
Report = pd.DataFrame(
data=d, index=["Accuracy", "Jaccard Index", "F1-Score", "Log Loss"]
).T
print(tabulate(Report, headers="keys", tablefmt="psql"))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/926/129926128.ipynb
| null | null |
[{"Id": 129926128, "ScriptId": 38648061, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13993579, "CreationDate": "05/17/2023 13:24:03", "VersionNumber": 1.0, "Title": "Sydney_Weather_ML", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 129.0, "LinesInsertedFromPrevious": 129.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
| null | null | null | null |
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
from sklearn import preprocessing
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn import svm
from sklearn.svm import SVC
from sklearn.metrics import jaccard_score
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, accuracy_score
import sklearn.metrics as metrics
df = pd.read_csv(
"https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillUp/labs/ML-FinalAssignment/Weather_Data.csv"
)
df.head()
df.info()
df_sydney_processed = pd.get_dummies(
data=df, columns=["RainToday", "WindGustDir", "WindDir9am", "WindDir3pm"]
)
df_sydney_processed.replace(["No", "Yes"], [0, 1], inplace=True)
df_sydney_processed.drop("Date", axis=1, inplace=True)
df_sydney_processed = df_sydney_processed.astype(float)
# #1.Splitting the dataset into training and testing data for regression
features = df_sydney_processed.drop(columns="RainTomorrow", axis=1)
Y = df_sydney_processed["RainTomorrow"]
x_train, x_test, y_train, y_test = train_test_split(
features, Y, test_size=0.2, random_state=10
)
# #1.Linear Regression
# 2.Building and training a model using Linear Regression and calculating evaluation metrics
LinearReg = LinearRegression()
LinearReg.fit(x_train, y_train)
LinearReg.score(x_test, y_test)
predictions = LinearReg.predict(x_test)
LinearRegression_MAE = metrics.mean_absolute_error(y_test, predictions)
LinearRegression_MSE = metrics.mean_squared_error(y_test, predictions)
LinearRegression_R2 = metrics.r2_score(y_test, predictions)
print(
f"LinearRegression_MAE is :{LinearRegression_MAE},LinearRegression_MSE is :{LinearRegression_MSE},LinearRegression_R2 is :{LinearRegression_R2}"
)
print(
"LinearRegression_MAE is :{},LinearRegression_MSE is :{},LinearRegression_R2 is :{}".format(
LinearRegression_MAE, LinearRegression_MSE, LinearRegression_R2
)
)
# #2.KNN
k = 4
KNN = KNeighborsClassifier(n_neighbors=k).fit(x_train, y_train)
KNN
predictions = KNN.predict(x_test)
KNN_Accuracy_Score = metrics.accuracy_score(y_test, predictions)
KNN_JaccardIndex = metrics.jaccard_score(y_test, predictions)
KNN_F1_Score = metrics.f1_score(y_test, predictions)
print(
"KNN_Accuracy_Score is :{},KNN_JaccardIndex is :{},KNN_F1_Score is :{}".format(
KNN_Accuracy_Score, KNN_JaccardIndex, KNN_F1_Score
)
)
# #3.Decison Tree
Tree = DecisionTreeClassifier(criterion="entropy", max_depth=4)
Tree
Tree.fit(x_train, y_train)
predictions = Tree.predict(x_test)
Tree_Accuracy_Score = metrics.accuracy_score(y_test, predictions)
Tree_JaccardIndex = metrics.jaccard_score(y_test, predictions)
Tree_F1_Score = metrics.f1_score(y_test, predictions)
print(
"Tree_Accuracy_Score is :{},Tree_JaccardIndex is :{},Tree_F1_Score is :{}".format(
Tree_Accuracy_Score, Tree_JaccardIndex, Tree_F1_Score
)
)
# #4.Logistic Regression
x_train, x_test, y_train, y_test = train_test_split(
features, Y, test_size=0.2, random_state=1
)
LR = LogisticRegression(solver="liblinear")
LR.fit(x_train, y_train)
predictions = LR.predict(x_test)
LR_Accuracy_Score = metrics.accuracy_score(y_test, predictions)
LR_JaccardIndex = metrics.jaccard_score(y_test, predictions)
LR_F1_Score = metrics.f1_score(y_test, predictions)
LR_Log_Loss = metrics.log_loss(y_test, predictions)
print(
"LR_Accuracy_Score is :{},LR_JaccardIndex is :{},LR_F1_Score is :{},LR_Log_Loss is :{}".format(
LR_Accuracy_Score, LR_JaccardIndex, LR_F1_Score, LR_Log_Loss
)
)
# #5.SVM
SVM = SVC(kernel="linear", random_state=0)
SVM.fit(x_train, y_train)
predictions = SVM.predict(x_test)
SVM_Accuracy_Score = metrics.accuracy_score(y_test, predictions)
SVM_JaccardIndex = metrics.jaccard_score(y_test, predictions)
SVM_F1_Score = metrics.f1_score(y_test, predictions)
print(
"SVM_Accuracy_Score is :{},SVM_JaccardIndex is :{},SVM_F1_Score is :{}".format(
SVM_Accuracy_Score, SVM_JaccardIndex, SVM_F1_Score
)
)
cm = confusion_matrix(y_test, predictions)
print(cm)
from tabulate import tabulate
d = {
"KNN": [KNN_Accuracy_Score, KNN_JaccardIndex, KNN_F1_Score, "-"],
"Tree": [Tree_Accuracy_Score, Tree_JaccardIndex, Tree_F1_Score, "-"],
"LR": [LR_Accuracy_Score, LR_JaccardIndex, LR_F1_Score, LR_Log_Loss],
"SVM": [SVM_Accuracy_Score, SVM_JaccardIndex, SVM_F1_Score, "-"],
}
Report = pd.DataFrame(
data=d, index=["Accuracy", "Jaccard Index", "F1-Score", "Log Loss"]
).T
print(tabulate(Report, headers="keys", tablefmt="psql"))
| false | 0 | 1,655 | 2 | 1,655 | 1,655 |
||
129926623
|
<jupyter_start><jupyter_text>Tutorial2_data
Kaggle dataset identifier: tutorial2-data
<jupyter_script># Part1. Extract the city part ourside the AOI and export it as 'outside.shp'
import geopandas as gpd
import matplotlib.pyplot as plt
# 导入数据
cities = gpd.read_file("../input/tutorial2-data/belgian_cities.shp")
AOI = gpd.read_file("../input/tutorial2-data/area_of_interest_.shp")
# aoi之外的城市
cities_out_AOI = gpd.overlay(cities, AOI, how="difference")
cities_out_AOI.plot(figsize=(10, 10), cmap="winter", column="NAME_4")
# 保存文件
cities_out_AOI.to_file("./outside.shp.shp")
# Part2. Extract the centroids of each district and make a buffer of 30 meters for them
# Extract the centroid of each district
centroids = cities.centroid
centroids.plot()
# Create buffers around the centroids
buffered_centroids = centroids.buffer(3000)
# Store the buffers in a new column
cities["buffered_centroids"] = buffered_centroids
# The buffers were not visible at first, likely a projection issue; a buffer distance of 3000 map units shows up much better here
ax = cities.plot(color="white", edgecolor="black")
buffered_centroids.plot(ax=ax, color="red", alpha=0.5)
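# A hedged note on the buffer distance: if the layer is in a geographic CRS (degrees),
# buffer(3000) is not 3000 metres. A common fix is to reproject to a metric CRS first.
# EPSG:31370 (Belgian Lambert 72) is used below as a plausible choice for Belgian data;
# adjust it if the source CRS differs.
cities_metric = cities.to_crs(epsg=31370)
buffers_30m = cities_metric.centroid.buffer(30)
print(buffers_30m.head())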
# Part3. Export the buffers as 'centroid_buffer.shp'
buffered_centroids.to_file("./centroid_buffer.shp")
# Part4. Add your Mapbox tile to the folium map **(optional)**
import folium
# Create the map, centered on the chosen location with an initial zoom level
m = folium.Map(location=[22.352, 113.584], zoom_start=10)
# Add the custom Mapbox tile layer
folium.TileLayer(
tiles="https://api.mapbox.com/styles/v1/lewdsama/clhhmmtyq01dg01qu8mrihaim.html?title=copy&access_token=pk.eyJ1IjoibGV3ZHNhbWEiLCJhIjoiY2xoZzg2OXVnMDF5NzNocXlzMXdvdnFvaCJ9.XNSK15Gtm0bgWy7BeDyhfg&zoomwheel=true&fresh=true#14.96/22.35194/113.58342",
name="My Custom Tile",
attr="My attribution",
).add_to(m)
# Add a layer control to the map
folium.LayerControl().add_to(m)
# Display the map
m
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/926/129926623.ipynb
|
tutorial2-data
|
kyrenchen
|
[{"Id": 129926623, "ScriptId": 38642231, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15147144, "CreationDate": "05/17/2023 13:27:45", "VersionNumber": 1.0, "Title": "YinZicheng_homework 2", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 64.0, "LinesInsertedFromPrevious": 64.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186349928, "KernelVersionId": 129926623, "SourceDatasetVersionId": 3529467}]
|
[{"Id": 3529467, "DatasetId": 2123013, "DatasourceVersionId": 3582279, "CreatorUserId": 3948686, "LicenseName": "Unknown", "CreationDate": "04/26/2022 04:38:00", "VersionNumber": 2.0, "Title": "Tutorial2_data", "Slug": "tutorial2-data", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Data Update 2022/04/26", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2123013, "CreatorUserId": 3948686, "OwnerUserId": 3948686.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3529467.0, "CurrentDatasourceVersionId": 3582279.0, "ForumId": 2148571, "Type": 2, "CreationDate": "04/26/2022 03:37:14", "LastActivityDate": "04/26/2022", "TotalViews": 184, "TotalDownloads": 35, "TotalVotes": 0, "TotalKernels": 19}]
|
[{"Id": 3948686, "UserName": "kyrenchen", "DisplayName": "Kyren Chen", "RegisterDate": "10/30/2019", "PerformanceTier": 0}]
|
# Part1. Extract the city part outside the AOI and export it as 'outside.shp'
import geopandas as gpd
import matplotlib.pyplot as plt

# Load the input data
cities = gpd.read_file("../input/tutorial2-data/belgian_cities.shp")
AOI = gpd.read_file("../input/tutorial2-data/area_of_interest_.shp")
# Keep the parts of the cities that fall outside the AOI
cities_out_AOI = gpd.overlay(cities, AOI, how="difference")
cities_out_AOI.plot(figsize=(10, 10), cmap="winter", column="NAME_4")
# Save the result to 'outside.shp'
cities_out_AOI.to_file("./outside.shp")
# Part2. Extract the centroids of each district and make a buffer of 30 meters for them
# Extract the centroid of each district
centroids = cities.centroid
centroids.plot()
# Create buffers around the centroids
buffered_centroids = centroids.buffer(3000)
# Store the buffers in a new column
cities["buffered_centroids"] = buffered_centroids
# The buffers were not visible at first, likely a projection issue; a buffer distance of 3000 map units shows up much better here
ax = cities.plot(color="white", edgecolor="black")
buffered_centroids.plot(ax=ax, color="red", alpha=0.5)
# Part3. Export the buffers as 'centroid_buffer.shp'
buffered_centroids.to_file("./centroid_buffer.shp")
# Part4. Add your Mapbox tile to the folium map **(optional)**
import folium
# Create the map, centered on the chosen location with an initial zoom level
m = folium.Map(location=[22.352, 113.584], zoom_start=10)
# Add the custom Mapbox tile layer
folium.TileLayer(
tiles="https://api.mapbox.com/styles/v1/lewdsama/clhhmmtyq01dg01qu8mrihaim.html?title=copy&access_token=pk.eyJ1IjoibGV3ZHNhbWEiLCJhIjoiY2xoZzg2OXVnMDF5NzNocXlzMXdvdnFvaCJ9.XNSK15Gtm0bgWy7BeDyhfg&zoomwheel=true&fresh=true#14.96/22.35194/113.58342",
name="My Custom Tile",
attr="My attribution",
).add_to(m)
# Add the layer control to the map
folium.LayerControl().add_to(m)
# Display the map
m
| false | 0 | 652 | 0 | 673 | 652 |
||
129406453
|
<jupyter_start><jupyter_text>Saudi Arabia Real Estate (AQAR)
### Context
The goal of this statistical analysis is to help us understand the relationship between house features and how these variables are used to predict the house price.
The chosen cities are Riyadh, Jeddah, Dammam, and Al-Khobar
- Riyadh is the capital and largest city in Saudi Arabia, with the largest municipal population in the Middle East. Riyadh has a diverse range of people and cultures, and it is still growing day by day.
- Jeddah is located in the middle of the eastern coast of the Red Sea and is considered the economic and tourism capital of the country.
- Dammam lies on the Persian Gulf northwest of Bahrain Island and forms a larger metropolitan and industrial complex with Khobar, Qatif, and Dhahran.
- Al-Khobar city is one of the three main cities in the Eastern Province, the others being Dammam and Dhahran. It is developing into an important industrial city, with factories turning out industrial gas, dairy products, carbonated water, tissue paper and ready-made garments.
This dataset focuses only on rental houses.
### Content
- city: the city the house is located in
- district: the district the house is located in
- front: which direction the house faces (north, west, etc.)
- size: size in m^2
- property_age: age of the property
- bedrooms: number of bedrooms
- bathrooms: number of bathrooms
- livingrooms: number of living rooms
- kitchen: whether the house has a kitchen or not
- garage: whether the house has a garage or not
- driver_room: whether the house has a driver room or not
- maid_room: whether the house has a maid room or not
- furnished: whether the house is furnished or not
- ac: whether the house has air conditioning or not
- roof: whether the house has usable roof space on top or not
- pool: whether the house has a pool or not
- frontyard: whether the house has a front yard or not
- basement: whether the house has a basement or not
- duplex: whether the house is a duplex or not
- stairs: whether the house has stairs or not
- elevator: whether the house has an elevator or not
- fireplace: whether the house has a fireplace or not
- price: the price of the house
- details: any additional details from the house owner about the house
### Aims
This dataset aims to help analyze the real estate market of these cities and investigate the relationship between prices and the other features. The dataset was collected and scraped from the [Aqar website](https://sa.aqar.fm).
Kaggle dataset identifier: saudi-arabia-real-estate-aqar
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import pandas as pd
df = pd.read_csv("/kaggle/input/saudi-arabia-real-estate-aqar/SA_Aqar.csv")
df.head(2)
df.shape
print(df.isnull().sum())
df.drop("details", axis=1, inplace=True)
df.duplicated().sum()
df.drop_duplicates(inplace=True)
import matplotlib.pyplot as plt
import seaborn as sns
sns.histplot(df.price)
plt.show()
sns.boxplot(data=df, y=df.price)
plt.show()
target = df.price.values
import numpy as np
logged_target = np.log(target)
sns.boxplot(logged_target)
plt.show()
# df[['city']].apply(lambda x: x.astype('category'))
# df=df.drop(["city", "district", "front","front"],axis=1)
df.dtypes
# sns.pairplot(data=df)
# plt.show()
num_features = df.select_dtypes("number").reset_index(drop=True)
text_features = df.select_dtypes("object").reset_index(drop=True)
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(sparse_output=False)
ohe.fit(text_features)
ohe_data = ohe.transform(text_features)
ohe_data = pd.DataFrame(ohe_data, columns=ohe.get_feature_names_out())
ohe_data.head()
full_data = pd.concat([ohe_data, num_features], axis=1)
full_data.head()
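# Hedged alternative (illustrative only): the same one-hot step can be expressed with a
# ColumnTransformer, which keeps the encoding inside a single object that can later sit
# in a Pipeline together with the model.
from sklearn.compose import ColumnTransformer
preprocess = ColumnTransformer(
    transformers=[
        (
            "ohe",
            OneHotEncoder(sparse_output=False, handle_unknown="ignore"),
            text_features.columns.tolist(),
        )
    ],
    remainder="passthrough",  # keep the numeric columns as-is
)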
features = full_data.drop("price", axis=1)
target = full_data.price
logged_target = np.log(target)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    features, logged_target, test_size=0.2, random_state=42
)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train, y_train)
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
mae = mean_absolute_error(y_test, model.predict(X_test))
print("mae= " + str(mae))
msr = mean_squared_error(y_test, model.predict(X_test))
print("msr= " + str(msr))
r2_score(y_test, model.predict(X_test))
r2score = r2_score(y_test, model.predict(X_test))
print("r2score= " + str(r2score))
comp = np.column_stack((y_test, model.predict(X_test)))
comp[:4, :]
import seaborn as sns
sns.regplot(x=comp[:, 0], y=comp[:, 1])
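# Because the target was log-transformed, the errors above are in log-price units.
# A small hedged sketch of reporting the linear model's error back on the original
# price scale, which is easier to interpret:
pred_price = np.exp(model.predict(X_test))
true_price = np.exp(y_test)
print("MAE in original price units:", mean_absolute_error(true_price, pred_price))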
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import *
dt = DecisionTreeRegressor()
dt.fit(X_train, y_train)
pre = dt.predict(X_test)
mae = mean_absolute_error(y_test, pre)
mse = mean_squared_error(y_test, pre)
r2s = r2_score(y_test, pre)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
comp = np.column_stack((y_test, pre))  # compare the decision-tree predictions, not the linear model's
comp[:3, :]
import seaborn as sns
sns.regplot(x=comp[:, 0], y=comp[:, 1])
# 1-Scaling
# 2-svd
# model
# ## Pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.pipeline import make_pipeline, Pipeline
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import GridSearchCV
reg = make_pipeline(
RobustScaler(),
TruncatedSVD(n_components=2),
DecisionTreeRegressor(max_depth=5, random_state=42),
)
reg.fit(X_train, y_train)
predictions = reg.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
# ## Decision Tree Tuning
# #### 1- max depth
# #### 2- min_samples_leaf
# #### 3- min_samples_split
reg = Pipeline(
steps=[
("scaler", RobustScaler()),
("tsvd", TruncatedSVD()),
("model", DecisionTreeRegressor(random_state=42)),
]
)
reg.fit(X_train, y_train)
predictions = reg.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
params = {
"model__max_depth": [2, 3, 4, 5, 8, 10, 15, 20],
"model__min_samples_split": [5, 10, 15, 20],
"model__min_samples_leaf": [2, 3, 4, 5, 6, 7, 8, 9, 10],
}
dt_grid_model = GridSearchCV(estimator=reg, param_grid=params, n_jobs=-1)
dt_grid_model.fit(X_train, y_train)
dt_grid_model.best_params_
predictions = dt_grid_model.best_estimator_.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
# ## Save Model
import pickle
with open("pipe_dt_model.pkl", "wb") as file:
f = pickle.dump(dt_grid_model.best_estimator_, file)
model = pickle.load(open("/kaggle/working/pipe_dt_model.pkl", "rb"))
model.predict(X_test)
# ## Random Forest regression
# #### Vanilla Model
#
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
X_train.shape
predictions = rf.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
sns.set()
sns.regplot(x=y_test, y=predictions, line_kws={"color": "red"})
plt.show()
# ## Random Forest with pipeline
from sklearn.preprocessing import StandardScaler
rf_reg = Pipeline(
steps=[
("scaler", StandardScaler()),
("tsvd", TruncatedSVD()),
(
"model",
RandomForestRegressor(n_estimators=100, max_features=None, random_state=42),
),
]
)
rf_reg.fit(X_train, y_train)
predictions = rf_reg.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
sns.regplot(x=y_test, y=predictions, line_kws={"color": "red"})
plt.show()
from sklearn.model_selection import RandomizedSearchCV
params = {
"tsvd__n_components": [2, 3, 4, 5, 10],
"model__n_estimators": [100, 120, 150, 200],
"model__max_depth": [2, 3, 4, 5, 8, 10, 15, 20],
"model__min_samples_split": [5, 10, 15, 20],
"model__min_samples_leaf": [2, 3, 4, 5, 6, 7, 8, 9, 10],
}
rf_grid_model = RandomizedSearchCV(rf_reg, params, n_iter=30, n_jobs=-1)
rf_grid_model.fit(X_train, y_train)
rf_grid_model.best_params_
predictions = rf_grid_model.best_estimator_.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
from sklearn.ensemble import BaggingRegressor
bag_reg = BaggingRegressor(
estimator=DecisionTreeRegressor(),
n_estimators=200,
max_samples=0.6,
max_features=0.8,
bootstrap_features=False,
n_jobs=-1,
)
bag_reg.fit(X_train, y_train)
predictions = bag_reg.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
# ## Ada boost
from sklearn.ensemble import AdaBoostRegressor
ada = AdaBoostRegressor(n_estimators=50, learning_rate=0.01)
ada.fit(X_train, y_train)
predictions = ada.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
ada.estimator_errors_.shape
# ## Tuning Ada boost
params = {
"learning_rate": np.linspace(0.001, 1, 15),
"n_estimators": [50, 100, 150, 200, 250],
}
ada_grid_model = GridSearchCV(ada, param_grid=params, n_jobs=-1)
ada_grid_model.fit(X_train, y_train)
ada_grid_model.best_params_
predictions = ada_grid_model.best_estimator_.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
# ## Gradient Boosting Regression
from sklearn.ensemble import GradientBoostingRegressor
gbr = GradientBoostingRegressor(random_state=42)
gbr.fit(X_train, y_train)
predictions = gbr.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
# ## Tune GBR
params = dict(
max_depth=[1, 2, 3, 4],
n_estimators=[100, 200, 300],
learning_rate=np.linspace(0.01, 1, 15),
min_samples_split=[5, 7, 10, 15, 20, 25],
subsample=[0.3, 0.5, 0.7, 1],
min_samples_leaf=[2, 3, 4, 7, 10, 15],
)
gbr_grid_model = GridSearchCV(gbr, param_grid=params, n_jobs=-1)
gbr_grid_model.fit(X_train, y_train)
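# Hedged follow-up (not in the original, which stops after fitting the grid search):
# evaluate the tuned GBR on the held-out test set, mirroring the earlier sections.
predictions = gbr_grid_model.best_estimator_.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)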
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/406/129406453.ipynb
|
saudi-arabia-real-estate-aqar
|
lama122
|
[{"Id": 129406453, "ScriptId": 38233181, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 7938977, "CreationDate": "05/13/2023 14:12:44", "VersionNumber": 3.0, "Title": "Riyadh-house-price", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 335.0, "LinesInsertedFromPrevious": 124.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 211.0, "LinesInsertedFromFork": 237.0, "LinesDeletedFromFork": 93.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 98.0, "TotalVotes": 0}]
|
[{"Id": 185425899, "KernelVersionId": 129406453, "SourceDatasetVersionId": 1888100}]
|
[{"Id": 1888100, "DatasetId": 1124657, "DatasourceVersionId": 1926298, "CreatorUserId": 3851174, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "01/28/2021 17:41:44", "VersionNumber": 4.0, "Title": "Saudi Arabia Real Estate (AQAR)", "Slug": "saudi-arabia-real-estate-aqar", "Subtitle": "Rental house dataset for Riyadh, Jeddah, Dammam, and Alkhobar", "Description": "### Context\n\nThe goal of this statistical analysis is to help us understand the relationship between house features and how these variables are used to predict the house price.\n\nThe chosen cities are Riyadh, Jeddah, Dammam, and Al-Khobar\n\n- Riyadh is the capital and largest city in Saudi Arabia, with the largest municipal population in the Middle East. Riyadh has a diverse range of people and cultures, it is still growing day by day. \n\n- Jeddah which located in the middle of the eastern coast of the red sea and is considered the economic and tourism capital of the country.\n\n- Dammam it lies on the Persian Gulf northwest of Bahrain Island and forms a larger metropolitan and industrial complex with Khobar, Qatif, and Dhahran.\n\n- Al-Khobar city is one of the three main cities in the Eastern Province, the others being Dammam and Dhahran. It is developing into an important industrial city, with factories turning out industrial gas, dairy products, carbonated water, tissue paper and ready-made garments. \n\n\nThis dataset will only focused on the rental houses.\n\n### Content\n-city: city where house locate in\n-district: district where house locate in\n-front: What is the house front is north, west .. etc\n-size: size in m^2\n-property_age: property age for the house\n-bedrooms: number of bedrooms\n-bathrooms: number of bathrooms\n-livingrooms: number of livingrooms\n-kitchen: show whether the house have a kitchen or not\n-garage: show whether the house have a garage or not\n-driver_room: show whether the house have a driver_room or not\n-maid_room: show whether the house have a maid_room or not\n-furnished: show whether the house is furnished or not\n-ac: show whether the house have a ac or not\n-roof: show whether the house have a space for roof on top or not\n-pool: show whether the house have a pool or not\n-frontyard: show whether the house have a frontyard or not\n-basement: show whether the house have a basement or not\n-duplex: show whether the house is a duplex or not\n-stairs: show whether the house have a stairs or not\n-elevator: show whether the house have an elevator or not\n-fireplace: show whether the house have a fireplace or not\n-price: show the price of the house\n-details: shows any additional details from the house owner about the house\n\n\n### Aims\n\nThis dataset aims to help analyzing the real estate of those cities to investigate the relationships of prices with other features. The dataset is collected and scrapped from [Aqar website](https://sa.aqar.fm).", "VersionNotes": "SA real estate (AQAR)", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1124657, "CreatorUserId": 3851174, "OwnerUserId": 3851174.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1888100.0, "CurrentDatasourceVersionId": 1926298.0, "ForumId": 1142042, "Type": 2, "CreationDate": "01/28/2021 16:29:38", "LastActivityDate": "01/28/2021", "TotalViews": 10591, "TotalDownloads": 1105, "TotalVotes": 31, "TotalKernels": 6}]
|
[{"Id": 3851174, "UserName": "lama122", "DisplayName": "Lama Alharbi", "RegisterDate": "10/13/2019", "PerformanceTier": 0}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import pandas as pd
df = pd.read_csv("/kaggle/input/saudi-arabia-real-estate-aqar/SA_Aqar.csv")
df.head(2)
df.shape
print(df.isnull().sum())
df.drop("details", axis=1, inplace=True)
df.duplicated().sum()
df.drop_duplicates(inplace=True)
import matplotlib.pyplot as plt
import seaborn as sns
sns.histplot(df.price)
plt.show()
sns.boxplot(data=df, y=df.price)
plt.show()
target = df.price.values
import numpy as np
logged_target = np.log(target)
sns.boxplot(logged_target)
plt.show()
# df[['city']].apply(lambda x: x.astype('category'))
# df=df.drop(["city", "district", "front","front"],axis=1)
df.dtypes
# sns.pairplot(data=df)
# plt.show()
num_features = df.select_dtypes("number").reset_index(drop=True)
text_features = df.select_dtypes("object").reset_index(drop=True)
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(sparse_output=False)
ohe.fit(text_features)
ohe_data = ohe.transform(text_features)
ohe_data = pd.DataFrame(ohe_data, columns=ohe.get_feature_names_out())
ohe_data.head()
full_data = pd.concat([ohe_data, num_features], axis=1)
full_data.head()
features = full_data.drop("price", axis=1)
target = full_data.price
logged_target = np.log(target)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    features, logged_target, test_size=0.2, random_state=42
)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train, y_train)
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
mae = mean_absolute_error(y_test, model.predict(X_test))
print("mae= " + str(mae))
msr = mean_squared_error(y_test, model.predict(X_test))
print("msr= " + str(msr))
r2_score(y_test, model.predict(X_test))
r2score = r2_score(y_test, model.predict(X_test))
print("r2score= " + str(r2score))
comp = np.column_stack((y_test, model.predict(X_test)))
comp[:4, :]
import seaborn as sns
sns.regplot(x=comp[:, 0], y=comp[:, 1])
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import *
dt = DecisionTreeRegressor()
dt.fit(X_train, y_train)
pre = dt.predict(X_test)
mae = mean_absolute_error(y_test, pre)
mse = mean_squared_error(y_test, pre)
r2s = r2_score(y_test, pre)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
comp = np.column_stack((y_test, pre))  # compare the decision-tree predictions, not the linear model's
comp[:3, :]
import seaborn as sns
sns.regplot(x=comp[:, 0], y=comp[:, 1])
# 1-Scaling
# 2-svd
# model
# ## Pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.pipeline import make_pipeline, Pipeline
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import GridSearchCV
reg = make_pipeline(
RobustScaler(),
TruncatedSVD(n_components=2),
DecisionTreeRegressor(max_depth=5, random_state=42),
)
reg.fit(X_train, y_train)
predictions = reg.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
# ## Decision Tree Tuning
# #### 1- max depth
# #### 2- min_samples_leaf
# #### 3- min_samples_split
reg = Pipeline(
steps=[
("scaler", RobustScaler()),
("tsvd", TruncatedSVD()),
("model", DecisionTreeRegressor(random_state=42)),
]
)
reg.fit(X_train, y_train)
predictions = reg.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
params = {
"model__max_depth": [2, 3, 4, 5, 8, 10, 15, 20],
"model__min_samples_split": [5, 10, 15, 20],
"model__min_samples_leaf": [2, 3, 4, 5, 6, 7, 8, 9, 10],
}
dt_grid_model = GridSearchCV(estimator=reg, param_grid=params, n_jobs=-1)
dt_grid_model.fit(X_train, y_train)
dt_grid_model.best_params_
predictions = dt_grid_model.best_estimator_.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
# ## Save Model
import pickle
with open("pipe_dt_model.pkl", "wb") as file:
f = pickle.dump(dt_grid_model.best_estimator_, file)
model = pickle.load(open("/kaggle/working/pipe_dt_model.pkl", "rb"))
model.predict(X_test)
# ## Random Forest regression
# #### Vanilla Model
#
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
X_train.shape
predictions = rf.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
sns.set()
sns.regplot(x=y_test, y=predictions, line_kws={"color": "red"})
plt.show()
# ## Random Forest with pipeline
from sklearn.preprocessing import StandardScaler
rf_reg = Pipeline(
steps=[
("scaler", StandardScaler()),
("tsvd", TruncatedSVD()),
(
"model",
RandomForestRegressor(n_estimators=100, max_features=None, random_state=42),
),
]
)
rf_reg.fit(X_train, y_train)
predictions = rf_reg.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
sns.regplot(x=y_test, y=predictions, line_kws={"color": "red"})
plt.show()
from sklearn.model_selection import RandomizedSearchCV
params = {
"tsvd__n_components": [2, 3, 4, 5, 10],
"model__n_estimators": [100, 120, 150, 200],
"model__max_depth": [2, 3, 4, 5, 8, 10, 15, 20],
"model__min_samples_split": [5, 10, 15, 20],
"model__min_samples_leaf": [2, 3, 4, 5, 6, 7, 8, 9, 10],
}
rf_grid_model = RandomizedSearchCV(rf_reg, params, n_iter=30, n_jobs=-1)
rf_grid_model.fit(X_train, y_train)
rf_grid_model.best_params_
predictions = rf_grid_model.best_estimator_.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
from sklearn.ensemble import BaggingRegressor
bag_reg = BaggingRegressor(
estimator=DecisionTreeRegressor(),
n_estimators=200,
max_samples=0.6,
max_features=0.8,
bootstrap_features=False,
n_jobs=-1,
)
bag_reg.fit(X_train, y_train)
predictions = bag_reg.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
# ## Ada boost
from sklearn.ensemble import AdaBoostRegressor
ada = AdaBoostRegressor(n_estimators=50, learning_rate=0.01)
ada.fit(X_train, y_train)
predictions = ada.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
ada.estimator_errors_.shape
# ## Tuning Ada boost
params = {
"learning_rate": np.linspace(0.001, 1, 15),
"n_estimators": [50, 100, 150, 200, 250],
}
ada_grid_model = GridSearchCV(ada, param_grid=params, n_jobs=-1)
ada_grid_model.fit(X_train, y_train)
ada_grid_model.best_params_
predictions = ada_grid_model.best_estimator_.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
# ## Gradient Boosting Regression
from sklearn.ensemble import GradientBoostingRegressor
gbr = GradientBoostingRegressor(random_state=42)
gbr.fit(X_train, y_train)
predictions = gbr.predict(X_test)
mae = mean_absolute_error(y_test, predictions)
mse = mean_squared_error(y_test, predictions)
r2s = r2_score(y_test, predictions)
print("mae= ", mae)
print("mse= ", mse)
print("r2s= ", r2s)
# ## Tune GBR
params = dict(
max_depth=[1, 2, 3, 4],
n_estimators=[100, 200, 300],
learning_rate=np.linspace(0.01, 1, 15),
min_samples_split=[5, 7, 10, 15, 20, 25],
subsample=[0.3, 0.5, 0.7, 1],
min_samples_leaf=[2, 3, 4, 7, 10, 15],
)
gbr_grid_model = GridSearchCV(gbr, param_grid=params, n_jobs=-1)
gbr_grid_model.fit(X_train, y_train)
| false | 1 | 3,268 | 0 | 3,966 | 3,268 |
||
129592337
|
# # Setup
import tensorflow as tf
import numpy as np
import IPython.display as display
# # `tf.train.Example`
# ### Data types for `tf.train.Example`
# The following functions can be used to convert a value to a type compatible
# with tf.train.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
print(_bytes_feature(b"test_string"))
print(_bytes_feature("test_bytes".encode("utf-8")))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
feature = _float_feature(np.exp(1))
feature.SerializeToString()
# ### Creating a `tf.train.Example` message
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature.
strings = np.array([b"cat", b"dog", b"chicken", b"horse", b"goat"])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution.
feature3 = np.random.randn(n_observations)
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.train.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.train.Example-compatible
# data type.
feature = {
"feature0": _int64_feature(feature0),
"feature1": _int64_feature(feature1),
"feature2": _bytes_feature(feature2),
"feature3": _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b"goat", 0.9876)
serialized_example
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
# # TFRecords format details
# uint64 length
# uint32 masked_crc32_of_length
# byte data[length]
# uint32 masked_crc32_of_data
#
# masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
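# A minimal sketch (not from the original notebook) of the masking step described above.
# It assumes the third-party `crc32c` package, which exposes the CRC-32C (Castagnoli)
# checksum that the TFRecord framing uses; treat it as illustrative rather than canonical.
import crc32c
def masked_crc32c(data: bytes) -> int:
    crc = crc32c.crc32c(data)
    # Rotate the 32-bit CRC right by 15 bits, add the magic constant, keep 32 bits.
    return (((crc >> 15) | (crc << 17)) + 0xA282EAD8) & 0xFFFFFFFF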
# # TFRecord files using tf.data
# ### Writing a TFRecord file
tf.data.Dataset.from_tensor_slices(feature1)
features_dataset = tf.data.Dataset.from_tensor_slices(
(feature0, feature1, feature2, feature3)
)
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0, f1, f2, f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
def tf_serialize_example(f0, f1, f2, f3):
tf_string = tf.py_function(
serialize_example,
(f0, f1, f2, f3), # Pass these args to the above function.
tf.string,
) # The return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0, f1, f2, f3)
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=()
)
serialized_features_dataset
filename = "test.tfrecord"
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
# ### Reading a TFRecord file
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
# Create a description of the features.
feature_description = {
"feature0": tf.io.FixedLenFeature([], tf.int64, default_value=0),
"feature1": tf.io.FixedLenFeature([], tf.int64, default_value=0),
"feature2": tf.io.FixedLenFeature([], tf.string, default_value=""),
"feature3": tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.train.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
# # TFRecord files in Python
# ### Writing a TFRecord file
# Write the `tf.train.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
# ### Reading a TFRecord file
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
#
# Dict[str,
# Union[List[float],
# List[int],
# List[str]]]
#
result = {}
# example.features.feature is the dictionary
for key, feature in example.features.feature.items():
# The values are the Feature objects which contain a `kind` which contains:
# one of three fields: bytes_list, float_list, int64_list
kind = feature.WhichOneof("kind")
result[key] = np.array(getattr(feature, kind).value)
result
# # Walkthrough: Reading and writing image data
cat_in_snow = tf.keras.utils.get_file(
"320px-Felis_catus-cat_on_snow.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg",
)
williamsburg_bridge = tf.keras.utils.get_file(
"194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg",
)
display.display(display.Image(filename=cat_in_snow))
display.display(
    display.HTML(
        'Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'
    )
)
display.display(display.Image(filename=williamsburg_bridge))
display.display(
    display.HTML(
        '<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'
    )
)
# ## Write the TFRecord file
image_labels = {
cat_in_snow: 0,
williamsburg_bridge: 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, "rb").read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.io.decode_jpeg(image_string).shape
feature = {
"height": _int64_feature(image_shape[0]),
"width": _int64_feature(image_shape[1]),
"depth": _int64_feature(image_shape[2]),
"label": _int64_feature(label),
"image_raw": _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split("\n")[:15]:
print(line)
print("...")
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.train.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = "images.tfrecords"
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, "rb").read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
# # Read the TFRecord file
raw_image_dataset = tf.data.TFRecordDataset("images.tfrecords")
# Create a dictionary describing the features.
image_feature_description = {
"height": tf.io.FixedLenFeature([], tf.int64),
"width": tf.io.FixedLenFeature([], tf.int64),
"depth": tf.io.FixedLenFeature([], tf.int64),
"label": tf.io.FixedLenFeature([], tf.int64),
"image_raw": tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.train.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
for image_features in parsed_image_dataset:
image_raw = image_features["image_raw"].numpy()
display.display(display.Image(data=image_raw))
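# Hedged follow-up (not in the original): decode the stored JPEG bytes back into an
# image tensor, which is the form a training pipeline would normally consume.
def _decode_image(features):
    image = tf.io.decode_jpeg(features["image_raw"], channels=3)
    label = features["label"]
    return image, label
decoded_image_dataset = parsed_image_dataset.map(_decode_image)
decoded_image_dataset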
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/592/129592337.ipynb
| null | null |
[{"Id": 129592337, "ScriptId": 38533518, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13215358, "CreationDate": "05/15/2023 05:20:06", "VersionNumber": 1.0, "Title": "TensorFlow: TFRecord and tf.train.Example", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 280.0, "LinesInsertedFromPrevious": 280.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 4}]
| null | null | null | null |
# # Setup
import tensorflow as tf
import numpy as np
import IPython.display as display
# # `tf.train.Example`
# ### Data types for `tf.train.Example`
# The following functions can be used to convert a value to a type compatible
# with tf.train.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
print(_bytes_feature(b"test_string"))
print(_bytes_feature("test_bytes".encode("utf-8")))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
feature = _float_feature(np.exp(1))
feature.SerializeToString()
# ### Creating a `tf.train.Example` message
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature.
strings = np.array([b"cat", b"dog", b"chicken", b"horse", b"goat"])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution.
feature3 = np.random.randn(n_observations)
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.train.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.train.Example-compatible
# data type.
feature = {
"feature0": _int64_feature(feature0),
"feature1": _int64_feature(feature1),
"feature2": _bytes_feature(feature2),
"feature3": _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b"goat", 0.9876)
serialized_example
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
# # TFRecords format details
# uint64 length
# uint32 masked_crc32_of_length
# byte data[length]
# uint32 masked_crc32_of_data
#
# masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
# # TFRecord files using tf.data
# ### Writing a TFRecord file
tf.data.Dataset.from_tensor_slices(feature1)
features_dataset = tf.data.Dataset.from_tensor_slices(
(feature0, feature1, feature2, feature3)
)
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0, f1, f2, f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
def tf_serialize_example(f0, f1, f2, f3):
tf_string = tf.py_function(
serialize_example,
(f0, f1, f2, f3), # Pass these args to the above function.
tf.string,
) # The return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0, f1, f2, f3)
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=()
)
serialized_features_dataset
filename = "test.tfrecord"
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
# ### Reading a TFRecord file
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
# Create a description of the features.
feature_description = {
"feature0": tf.io.FixedLenFeature([], tf.int64, default_value=0),
"feature1": tf.io.FixedLenFeature([], tf.int64, default_value=0),
"feature2": tf.io.FixedLenFeature([], tf.string, default_value=""),
"feature3": tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.train.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
# # TFRecord files in Python
# ### Writing a TFRecord file
# Write the `tf.train.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
# ### Reading a TFRecord file
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
#
# Dict[str,
# Union[List[float],
# List[int],
# List[str]]]
#
result = {}
# example.features.feature is the dictionary
for key, feature in example.features.feature.items():
# The values are the Feature objects which contain a `kind` which contains:
# one of three fields: bytes_list, float_list, int64_list
kind = feature.WhichOneof("kind")
result[key] = np.array(getattr(feature, kind).value)
result
# # Walkthrough: Reading and writing image data
cat_in_snow = tf.keras.utils.get_file(
"320px-Felis_catus-cat_on_snow.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg",
)
williamsburg_bridge = tf.keras.utils.get_file(
"194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg",
)
display.display(display.Image(filename=cat_in_snow))
display.display(
    display.HTML(
        'Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'
    )
)
display.display(display.Image(filename=williamsburg_bridge))
display.display(
    display.HTML(
        '<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'
    )
)
# ## Write the TFRecord file
image_labels = {
cat_in_snow: 0,
williamsburg_bridge: 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, "rb").read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.io.decode_jpeg(image_string).shape
feature = {
"height": _int64_feature(image_shape[0]),
"width": _int64_feature(image_shape[1]),
"depth": _int64_feature(image_shape[2]),
"label": _int64_feature(label),
"image_raw": _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split("\n")[:15]:
print(line)
print("...")
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.train.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = "images.tfrecords"
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, "rb").read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
# # Read the TFRecord file
raw_image_dataset = tf.data.TFRecordDataset("images.tfrecords")
# Create a dictionary describing the features.
image_feature_description = {
"height": tf.io.FixedLenFeature([], tf.int64),
"width": tf.io.FixedLenFeature([], tf.int64),
"depth": tf.io.FixedLenFeature([], tf.int64),
"label": tf.io.FixedLenFeature([], tf.int64),
"image_raw": tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.train.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
for image_features in parsed_image_dataset:
image_raw = image_features["image_raw"].numpy()
display.display(display.Image(data=image_raw))
| false | 0 | 2,782 | 4 | 2,782 | 2,782 |
||
129592558
|
<jupyter_start><jupyter_text>Medical Insurance Premium Prediction
### Context
A Medical Insurance Company Has Released Data For Almost 1000 Customers. Create A Model That Predicts The Yearly Medical Cover Cost. The Data Is Voluntarily Given By Customers.
### Content
The Dataset Contains Health Related Parameters Of The Customers. Use Them To Build A Model And Also Perform EDA On The Same.
The Premium Price Is In INR(₹) Currency And Showcases Prices For A Whole Year.
### Inspiration
Help Solve A Crucial Finance Problem That Would Potentially Impact Many People And Would Help Them Make Better Decisions.
Don't Forget To Submit Your EDAs And Models In The Task Section. These Will Be Keenly Reviewed
Hope You Enjoy Working On The Data.
Note: this is a dummy dataset used for teaching and training purposes. It is free to use.
Image Credits-Unsplash
Kaggle dataset identifier: medical-insurance-premium-prediction
<jupyter_code>import pandas as pd
df = pd.read_csv('medical-insurance-premium-prediction/Medicalpremium.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 986 entries, 0 to 985
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Age 986 non-null int64
1 Diabetes 986 non-null int64
2 BloodPressureProblems 986 non-null int64
3 AnyTransplants 986 non-null int64
4 AnyChronicDiseases 986 non-null int64
5 Height 986 non-null int64
6 Weight 986 non-null int64
7 KnownAllergies 986 non-null int64
8 HistoryOfCancerInFamily 986 non-null int64
9 NumberOfMajorSurgeries 986 non-null int64
10 PremiumPrice 986 non-null int64
dtypes: int64(11)
memory usage: 84.9 KB
<jupyter_text>Examples:
{
"Age": 45,
"Diabetes": 0,
"BloodPressureProblems": 0,
"AnyTransplants": 0,
"AnyChronicDiseases": 0,
"Height": 155,
"Weight": 57,
"KnownAllergies": 0,
"HistoryOfCancerInFamily": 0,
"NumberOfMajorSurgeries": 0,
"PremiumPrice": 25000
}
{
"Age": 60,
"Diabetes": 1,
"BloodPressureProblems": 0,
"AnyTransplants": 0,
"AnyChronicDiseases": 0,
"Height": 180,
"Weight": 73,
"KnownAllergies": 0,
"HistoryOfCancerInFamily": 0,
"NumberOfMajorSurgeries": 0,
"PremiumPrice": 29000
}
{
"Age": 36,
"Diabetes": 1,
"BloodPressureProblems": 1,
"AnyTransplants": 0,
"AnyChronicDiseases": 0,
"Height": 158,
"Weight": 59,
"KnownAllergies": 0,
"HistoryOfCancerInFamily": 0,
"NumberOfMajorSurgeries": 1,
"PremiumPrice": 23000
}
{
"Age": 52,
"Diabetes": 1,
"BloodPressureProblems": 1,
"AnyTransplants": 0,
"AnyChronicDiseases": 1,
"Height": 183,
"Weight": 93,
"KnownAllergies": 0,
"HistoryOfCancerInFamily": 0,
"NumberOfMajorSurgeries": 2,
"PremiumPrice": 28000
}
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Importing Libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import sklearn
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.pipeline import Pipeline
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV
import warnings
warnings.filterwarnings("ignore")
# # Importing Data
data = pd.read_csv(
"/kaggle/input/medical-insurance-premium-prediction/Medicalpremium.csv"
)
data.head()
# # EDA
data.info()
data.describe()
from ydata_profiling import ProfileReport
profile = ProfileReport(data, title="Medical Insurance")
profile.to_notebook_iframe()
# regression plots of price against all features
cols = [col for col in data.columns if col != "PremiumPrice"]
for i in cols:
plt.figure(figsize=(16, 8))
sns.regplot(
x=data[i],
y=data.PremiumPrice,
data=data,
line_kws={"color": "red"},
scatter_kws={"color": "blue"},
)
plt.show()
print("-" * 128)
# # Linear Regression Models
# The function below will plot the distribution of two inputs.
#
def plot_dis(y, yhat):
plt.figure()
ax1 = sns.distplot(y, hist=False, color="r", label="Actual Value")
sns.distplot(yhat, hist=False, color="b", label="Fitted Values", ax=ax1)
plt.legend()
plt.title("Actual vs Fitted Values")
plt.xlabel("Price (in rupees)")
plt.ylabel("Proportion of records")
plt.show()
plt.close()
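# Note (hedged): `sns.distplot` is deprecated and removed in recent seaborn releases.
# If the helper above fails on a newer environment, an equivalent comparison can be
# drawn with `kdeplot`, as in this sketch:
def plot_dis_kde(y, yhat):
    plt.figure()
    sns.kdeplot(y, color="r", label="Actual Value")
    sns.kdeplot(yhat, color="b", label="Fitted Values")
    plt.legend()
    plt.title("Actual vs Fitted Values")
    plt.xlabel("Price (in rupees)")
    plt.ylabel("Density")
    plt.show()
    plt.close()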
rmse_df = pd.DataFrame(columns=["Model", "RMSE"])
cols = [col for col in data.columns if col != "PremiumPrice"]
X = data[cols]
X.head()
y = data["PremiumPrice"]
y.head()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
print("Number of test samples:", X_test.shape[0])
print("Number of training samples:", X_train.shape[0])
# ### without Scaling
lr = LinearRegression()
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", lr.score(X_train, y_train))
print("R^2 on testing data ", lr.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "LR_without_Scaling", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
# ### with Scaling
steps = [("scaler", StandardScaler()), ("lm", LinearRegression())]
pipe = Pipeline(steps=steps)
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", pipe.score(X_train, y_train))
print("R^2 on testing data ", pipe.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "LR_with_Scaling(SS)", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
# From above we can see that there is no difference between with and without scaling, so we can continue either way
plot_dis(y_test, y_pred)
# ## Polynomial Regression
Input = [
("polynomial", PolynomialFeatures(include_bias=False, degree=2)),
("model", LinearRegression()),
]
pipe = Pipeline(Input)
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", pipe.score(X_train, y_train))
print("R^2 on testing data ", pipe.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "LR_PolynomialFeatures(PF)", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
plot_dis(y_test, y_pred)
# ## Polynomial Regression with StandardScaler, GridSearchCV
Input = [
("scaler", StandardScaler()),
("polynomial", PolynomialFeatures(include_bias=False, degree=2)),
("model", LinearRegression()),
]
pipe = Pipeline(Input)
param_grid = {"polynomial__degree": [1, 2, 3]}
search = GridSearchCV(pipe, param_grid, n_jobs=1)
pipe.fit(X_train, y_train)
search.fit(X_train, y_train)  # fit the grid search on the training data so the test split stays held out
best = search.best_estimator_
best
y_pred = best.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", best.score(X_train, y_train))
print("R^2 on testing data ", best.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "LR_PF_SS", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
plot_dis(y_test, y_pred)
def rmse(ytrue, ypredicted):
return np.sqrt(mean_squared_error(ytrue, ypredicted))
# ## Lasso Regression
from sklearn.linear_model import LassoCV
alphas2 = np.array([1e-5, 5e-5, 0.0001, 0.0005, 0.001, 0.01, 0.1, 1, 10, 100])
lassoCV = LassoCV(alphas=alphas2, max_iter=50000, cv=3).fit(X_train, y_train)
y_pred = lassoCV.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("alpha : ", lassoCV.alpha_)
print("R^2 on training data ", lassoCV.score(X_train, y_train))
print("R^2 on testing data ", lassoCV.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "Lasso", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
print(
"Of {} coefficients, {} are non-zero with Lasso.".format(
len(lassoCV.coef_), len(lassoCV.coef_.nonzero()[0])
)
)
# ## Lasso with Polynomial Features, StandardScaler and GridSearchCV
Input = [
("polynomial", PolynomialFeatures(include_bias=False, degree=2)),
("ss", StandardScaler()),
("model", Lasso(alpha=1, tol=0.2)),
]
pipe = Pipeline(Input)
param_grid = {
"polynomial__degree": [1, 2, 3, 4, 5, 6],
"model__alpha": [1e-5, 5e-5, 0.0001, 0.0005, 0.001, 0.01, 0.1, 1, 10],
}
search = GridSearchCV(pipe, param_grid, n_jobs=2)
search.fit(X_train, y_train)
best = search.best_estimator_
print("best_score_: ", search.best_score_)
print("best_params_: ", search.best_params_)
best
y_pred = best.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", best.score(X_train, y_train))
print("R^2 on testing data ", best.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "Lasso_PF_SS", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
plot_dis(y_test, y_pred)
# # Ridge Regression
from sklearn.linear_model import RidgeCV
alphas = [0.005, 0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30, 80]
ridgeCV = RidgeCV(alphas=alphas, cv=4).fit(X_train, y_train)
y_pred = ridgeCV.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("alpha : ", ridgeCV.alpha_)
print("R^2 on training data ", ridgeCV.score(X_train, y_train))
print("R^2 on testing data ", ridgeCV.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "Ridge", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
# ## Ridge with Polynomial Features, StandardScaler and GridSearchCV
Input = [
("polynomial", PolynomialFeatures(include_bias=False, degree=2)),
("ss", StandardScaler()),
("model", Ridge(alpha=1)),
]
pipe = Pipeline(Input)
param_grid = {
"polynomial__degree": [1, 2, 3, 4, 5, 6],
"model__alpha": [0.0001, 0.001, 0.01, 0.1, 1, 10],
}
search = GridSearchCV(pipe, param_grid, n_jobs=2)
search.fit(X_train, y_train)
best = search.best_estimator_
print("best_score_: ", search.best_score_)
print("best_params_: ", search.best_params_)
best
y_pred = best.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", best.score(X_train, y_train))
print("R^2 on testing data ", best.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "Ridge_PF_SS", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
plot_dis(y_test, y_pred)
# # ElasticNet
from sklearn.linear_model import ElasticNetCV
l1_ratios = np.linspace(0.1, 0.9, 9)
elasticNetCV = ElasticNetCV(alphas=alphas2, l1_ratio=l1_ratios, max_iter=10000).fit(
X_train, y_train
)
y_pred = elasticNetCV.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("alpha : ", elasticNetCV.alpha_, "l1_ratio : ", elasticNetCV.l1_ratio_)
print("R^2 on training data ", elasticNetCV.score(X_train, y_train))
print("R^2 on testing data ", elasticNetCV.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "ElasticNet", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
# ## ElasticNet with Polynomial Features, StandardScaler and GridSearchCV
Input = [
("polynomial", PolynomialFeatures(include_bias=False, degree=2)),
("scaler", StandardScaler()),
("model", ElasticNet(tol=0.2, alpha=0.1, l1_ratio=0.1)),
]
pipe = Pipeline(Input)
param_grid = {
"polynomial__degree": [1, 2, 3, 4],
"model__alpha": [0.0001, 0.001, 0.01, 0.1, 1, 10],
"model__l1_ratio": [0.1, 0.5, 0.9],
}
search = GridSearchCV(pipe, param_grid, n_jobs=2)
search.fit(X_train, y_train)
best = search.best_estimator_
print("best_score_: ", search.best_score_)
print("best_params_: ", search.best_params_)
best
y_pred = best.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", best.score(X_train, y_train))
print("R^2 on testing data ", best.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "ElasticNet_PF_SS", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
plot_dis(y_test, y_pred)
# # Insights
rmse_df
rmse_df[rmse_df["RMSE"] == rmse_df.RMSE.min()]
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/592/129592558.ipynb
|
medical-insurance-premium-prediction
|
tejashvi14
|
[{"Id": 129592558, "ScriptId": 38532185, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12654687, "CreationDate": "05/15/2023 05:22:55", "VersionNumber": 1.0, "Title": "Medical Insurance Prediction", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 350.0, "LinesInsertedFromPrevious": 350.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 5}]
|
[{"Id": 185819026, "KernelVersionId": 129592558, "SourceDatasetVersionId": 2497017}]
|
[{"Id": 2497017, "DatasetId": 1507683, "DatasourceVersionId": 2539640, "CreatorUserId": 5472192, "LicenseName": "CC0: Public Domain", "CreationDate": "08/04/2021 05:48:58", "VersionNumber": 2.0, "Title": "Medical Insurance Premium Prediction", "Slug": "medical-insurance-premium-prediction", "Subtitle": "Predict Yearly Medical Cover Cost(\u20b9)", "Description": "### Context\n\nA Medical Insurance Company Has Released Data For Almost 1000 Customers. Create A Model That Predicts The Yearly Medical Cover Cost. The Data Is Voluntarily Given By Customers.\n\n\n### Content\n\nThe Dataset Contains Health Related Parameters Of The Customers. Use Them To Build A Model And Also Perform EDA On The Same. \nThe Premium Price Is In INR(\u20b9) Currency And Showcases Prices For A Whole Year.\n\n### Inspiration\n\nHelp Solve A Crucial Finance Problem That Would Potentially Impact Many People And Would Help Them Make Better Decisions.\nDon't Forget To Submit Your EDAs And Models In The Task Section. These Will Be Keenly Reviewed\nHope You Enjoy Working On The Data.\nnote- This is a dummy dataset used for teaching and training purposes. It is free to use,\nImage Credits-Unsplash", "VersionNotes": "Data Update 2021/08/04", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1507683, "CreatorUserId": 5472192, "OwnerUserId": 5472192.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2497017.0, "CurrentDatasourceVersionId": 2539640.0, "ForumId": 1527430, "Type": 2, "CreationDate": "08/02/2021 07:49:44", "LastActivityDate": "08/02/2021", "TotalViews": 47770, "TotalDownloads": 5196, "TotalVotes": 88, "TotalKernels": 17}]
|
[{"Id": 5472192, "UserName": "tejashvi14", "DisplayName": "Tejashvi", "RegisterDate": "07/15/2020", "PerformanceTier": 3}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Importing Libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import sklearn
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.pipeline import Pipeline
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV
import warnings
warnings.filterwarnings("ignore")
# # Importing Data
data = pd.read_csv(
"/kaggle/input/medical-insurance-premium-prediction/Medicalpremium.csv"
)
data.head()
# # EDA
data.info()
data.describe()
from ydata_profiling import ProfileReport
profile = ProfileReport(data, title="Medical Insurance")
profile.to_notebook_iframe()
# regression plots of price against all features
cols = [col for col in data.columns if col != "PremiumPrice"]
for i in cols:
plt.figure(figsize=(16, 8))
sns.regplot(
x=data[i],
y=data.PremiumPrice,
data=data,
line_kws={"color": "red"},
scatter_kws={"color": "blue"},
)
plt.show()
print("-" * 128)
# # Linear Regression Models
# The function below will plot the distribution of two inputs.
#
def plot_dis(y, yhat):
plt.figure()
ax1 = sns.distplot(y, hist=False, color="r", label="Actual Value")
sns.distplot(yhat, hist=False, color="b", label="Fitted Values", ax=ax1)
plt.legend()
plt.title("Actual vs Fitted Values")
plt.xlabel("Price (in rupees)")
plt.ylabel("Proportion of records")
plt.show()
plt.close()
rmse_df = pd.DataFrame(columns=["Model", "RMSE"])
cols = [col for col in data.columns if col != "PremiumPrice"]
X = data[cols]
X.head()
y = data["PremiumPrice"]
y.head()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
print("Number of test samples:", X_test.shape[0])
print("Number of training samples:", X_train.shape[0])
# ### without Scaling
lr = LinearRegression()
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", lr.score(X_train, y_train))
print("R^2 on testing data ", lr.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "LR_without_Scaling", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
# ### with Scaling
steps = [("scaler", StandardScaler()), ("lm", LinearRegression())]
pipe = Pipeline(steps=steps)
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", pipe.score(X_train, y_train))
print("R^2 on testing data ", pipe.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "LR_with_Scaling(SS)", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
# From the above we can see that there is no difference with and without scaling, so we can continue either way (a quick check follows the plot below)
plot_dis(y_test, y_pred)
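# A quick sanity check of the claim above (an added illustration, not part of the original
# workflow): ordinary least squares is invariant to affine feature scaling, so the scaled and
# unscaled pipelines should produce numerically identical predictions.
lr_plain = LinearRegression().fit(X_train, y_train)
lr_scaled = Pipeline([("scaler", StandardScaler()), ("lm", LinearRegression())]).fit(
    X_train, y_train
)
print(
    "Predictions identical up to floating point:",
    np.allclose(lr_plain.predict(X_test), lr_scaled.predict(X_test)),
)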
# ## Polynomial Regression
Input = [
("polynomial", PolynomialFeatures(include_bias=False, degree=2)),
("model", LinearRegression()),
]
pipe = Pipeline(Input)
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", pipe.score(X_train, y_train))
print("R^2 on testing data ", pipe.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "LR_PolynomialFeatures(PF)", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
plot_dis(y_test, y_pred)
# ## Polynomial Regression with StandardScaler, GridSearchCV
Input = [
("scaler", StandardScaler()),
("polynomial", PolynomialFeatures(include_bias=False, degree=2)),
("model", LinearRegression()),
]
pipe = Pipeline(Input)
param_grid = {"polynomial__degree": [1, 2, 3]}
search = GridSearchCV(pipe, param_grid, n_jobs=1)
# fit the grid search on the training data (fitting it on the test set would leak information)
search.fit(X_train, y_train)
best = search.best_estimator_
best
y_pred = best.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", best.score(X_train, y_train))
print("R^2 on testing data ", best.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "LR_PF_SS", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
plot_dis(y_test, y_pred)
def rmse(ytrue, ypredicted):
return np.sqrt(mean_squared_error(ytrue, ypredicted))
# ## Lasso Regression
from sklearn.linear_model import LassoCV
alphas2 = np.array([1e-5, 5e-5, 0.0001, 0.0005, 0.001, 0.01, 0.1, 1, 10, 100])
lassoCV = LassoCV(alphas=alphas2, max_iter=50000, cv=3).fit(X_train, y_train)
y_pred = lassoCV.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("alpha : ", lassoCV.alpha_)
print("R^2 on training data ", lassoCV.score(X_train, y_train))
print("R^2 on testing data ", lassoCV.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "Lasso", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
print(
"Of {} coefficients, {} are non-zero with Lasso.".format(
len(lassoCV.coef_), len(lassoCV.coef_.nonzero()[0])
)
)
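# To make the sparsity statement above concrete, the surviving coefficients can be listed by
# feature name (an added illustration; the column order matches X_train).
lasso_coefs = pd.Series(lassoCV.coef_, index=X_train.columns)
print("Non-zero Lasso coefficients:")
print(lasso_coefs[lasso_coefs != 0].sort_values(key=np.abs, ascending=False))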
# ## Lasso with Polynomial Features, StandardScaler and GridSearchCV
Input = [
("polynomial", PolynomialFeatures(include_bias=False, degree=2)),
("ss", StandardScaler()),
("model", Lasso(alpha=1, tol=0.2)),
]
pipe = Pipeline(Input)
param_grid = {
"polynomial__degree": [1, 2, 3, 4, 5, 6],
"model__alpha": [1e-5, 5e-5, 0.0001, 0.0005, 0.001, 0.01, 0.1, 1, 10],
}
search = GridSearchCV(pipe, param_grid, n_jobs=2)
search.fit(X_train, y_train)
best = search.best_estimator_
print("best_score_: ", search.best_score_)
print("best_params_: ", search.best_params_)
best
y_pred = best.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", best.score(X_train, y_train))
print("R^2 on testing data ", best.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "Lasso_PF_SS", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
plot_dis(y_test, y_pred)
# # Ridge Regression
from sklearn.linear_model import RidgeCV
alphas = [0.005, 0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30, 80]
ridgeCV = RidgeCV(alphas=alphas, cv=4).fit(X_train, y_train)
y_pred = ridgeCV.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("alpha : ", ridgeCV.alpha_)
print("R^2 on training data ", ridgeCV.score(X_train, y_train))
print("R^2 on testing data ", ridgeCV.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "Ridge", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
# ## Ridge with Polynomial Features, StandardScaler and GridSearchCV
Input = [
("polynomial", PolynomialFeatures(include_bias=False, degree=2)),
("ss", StandardScaler()),
("model", Ridge(alpha=1)),
]
pipe = Pipeline(Input)
param_grid = {
"polynomial__degree": [1, 2, 3, 4, 5, 6],
"model__alpha": [0.0001, 0.001, 0.01, 0.1, 1, 10],
}
search = GridSearchCV(pipe, param_grid, n_jobs=2)
search.fit(X_train, y_train)
best = search.best_estimator_
print("best_score_: ", search.best_score_)
print("best_params_: ", search.best_params_)
best
y_pred = best.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", best.score(X_train, y_train))
print("R^2 on testing data ", best.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "Ridge_PF_SS", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
plot_dis(y_test, y_pred)
# # ElasticNet
from sklearn.linear_model import ElasticNetCV
l1_ratios = np.linspace(0.1, 0.9, 9)
elasticNetCV = ElasticNetCV(alphas=alphas2, l1_ratio=l1_ratios, max_iter=10000).fit(
X_train, y_train
)
y_pred = elasticNetCV.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("alpha : ", elasticNetCV.alpha_, "l1_ratio : ", elasticNetCV.l1_ratio_)
print("R^2 on training data ", elasticNetCV.score(X_train, y_train))
print("R^2 on testing data ", elasticNetCV.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "ElasticNet", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
# ## ElasticNet with Polynomial Features, StandardScaler and GridSearchCV
Input = [
("polynomial", PolynomialFeatures(include_bias=False, degree=2)),
("scaler", StandardScaler()),
("model", ElasticNet(tol=0.2, alpha=0.1, l1_ratio=0.1)),
]
pipe = Pipeline(Input)
param_grid = {
"polynomial__degree": [1, 2, 3, 4],
"model__alpha": [0.0001, 0.001, 0.01, 0.1, 1, 10],
"model__l1_ratio": [0.1, 0.5, 0.9],
}
search = GridSearchCV(pipe, param_grid, n_jobs=2)
search.fit(X_train, y_train)
best = search.best_estimator_
print("best_score_: ", search.best_score_)
print("best_params_: ", search.best_params_)
best
y_pred = best.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("R^2 on training data ", best.score(X_train, y_train))
print("R^2 on testing data ", best.score(X_test, y_test))
print("MSE : ", mse)
print("RMSE : ", np.sqrt(mse))
print("R2_score : ", r2_score(y_pred, y_test))
new_row = {"Model": "ElasticNet_PF_SS", "RMSE": np.sqrt(mse)}
rmse_df = rmse_df.append(new_row, ignore_index=True)
plot_dis(y_test, y_pred)
# # Insights
rmse_df
rmse_df[rmse_df["RMSE"] == rmse_df.RMSE.min()]
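# For a slightly clearer comparison (an added sketch, not part of the original notebook),
# the models can be ranked by RMSE and plotted.
rmse_sorted = rmse_df.sort_values("RMSE").reset_index(drop=True)
print(rmse_sorted)
rmse_sorted.plot(
    kind="barh", x="Model", y="RMSE", legend=False, title="Model comparison by RMSE"
)
plt.xlabel("RMSE")
plt.tight_layout()
plt.show()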
|
[{"medical-insurance-premium-prediction/Medicalpremium.csv": {"column_names": "[\"Age\", \"Diabetes\", \"BloodPressureProblems\", \"AnyTransplants\", \"AnyChronicDiseases\", \"Height\", \"Weight\", \"KnownAllergies\", \"HistoryOfCancerInFamily\", \"NumberOfMajorSurgeries\", \"PremiumPrice\"]", "column_data_types": "{\"Age\": \"int64\", \"Diabetes\": \"int64\", \"BloodPressureProblems\": \"int64\", \"AnyTransplants\": \"int64\", \"AnyChronicDiseases\": \"int64\", \"Height\": \"int64\", \"Weight\": \"int64\", \"KnownAllergies\": \"int64\", \"HistoryOfCancerInFamily\": \"int64\", \"NumberOfMajorSurgeries\": \"int64\", \"PremiumPrice\": \"int64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 986 entries, 0 to 985\nData columns (total 11 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 Age 986 non-null int64\n 1 Diabetes 986 non-null int64\n 2 BloodPressureProblems 986 non-null int64\n 3 AnyTransplants 986 non-null int64\n 4 AnyChronicDiseases 986 non-null int64\n 5 Height 986 non-null int64\n 6 Weight 986 non-null int64\n 7 KnownAllergies 986 non-null int64\n 8 HistoryOfCancerInFamily 986 non-null int64\n 9 NumberOfMajorSurgeries 986 non-null int64\n 10 PremiumPrice 986 non-null int64\ndtypes: int64(11)\nmemory usage: 84.9 KB\n", "summary": "{\"Age\": {\"count\": 986.0, \"mean\": 41.74543610547667, \"std\": 13.963371389855682, \"min\": 18.0, \"25%\": 30.0, \"50%\": 42.0, \"75%\": 53.0, \"max\": 66.0}, \"Diabetes\": {\"count\": 986.0, \"mean\": 0.4198782961460446, \"std\": 0.49378922875252945, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 1.0, \"max\": 1.0}, \"BloodPressureProblems\": {\"count\": 986.0, \"mean\": 0.4685598377281947, \"std\": 0.49926377774285313, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 1.0, \"max\": 1.0}, \"AnyTransplants\": {\"count\": 986.0, \"mean\": 0.055780933062880324, \"std\": 0.22961465994678726, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 0.0, \"max\": 1.0}, \"AnyChronicDiseases\": {\"count\": 986.0, \"mean\": 0.18052738336713997, \"std\": 0.3848213056997442, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 0.0, \"max\": 1.0}, \"Height\": {\"count\": 986.0, \"mean\": 168.18255578093306, \"std\": 10.098154827654469, \"min\": 145.0, \"25%\": 161.0, \"50%\": 168.0, \"75%\": 176.0, \"max\": 188.0}, \"Weight\": {\"count\": 986.0, \"mean\": 76.95030425963489, \"std\": 14.265095839082017, \"min\": 51.0, \"25%\": 67.0, \"50%\": 75.0, \"75%\": 87.0, \"max\": 132.0}, \"KnownAllergies\": {\"count\": 986.0, \"mean\": 0.2150101419878296, \"std\": 0.41103787158451843, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 0.0, \"max\": 1.0}, \"HistoryOfCancerInFamily\": {\"count\": 986.0, \"mean\": 0.11764705882352941, \"std\": 0.3223532463115337, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 0.0, \"max\": 1.0}, \"NumberOfMajorSurgeries\": {\"count\": 986.0, \"mean\": 0.6673427991886409, \"std\": 0.749204951277794, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 3.0}, \"PremiumPrice\": {\"count\": 986.0, \"mean\": 24336.713995943206, \"std\": 6248.184382239677, \"min\": 15000.0, \"25%\": 21000.0, \"50%\": 23000.0, \"75%\": 28000.0, \"max\": 40000.0}}", "examples": 
"{\"Age\":{\"0\":45,\"1\":60,\"2\":36,\"3\":52},\"Diabetes\":{\"0\":0,\"1\":1,\"2\":1,\"3\":1},\"BloodPressureProblems\":{\"0\":0,\"1\":0,\"2\":1,\"3\":1},\"AnyTransplants\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0},\"AnyChronicDiseases\":{\"0\":0,\"1\":0,\"2\":0,\"3\":1},\"Height\":{\"0\":155,\"1\":180,\"2\":158,\"3\":183},\"Weight\":{\"0\":57,\"1\":73,\"2\":59,\"3\":93},\"KnownAllergies\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0},\"HistoryOfCancerInFamily\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0},\"NumberOfMajorSurgeries\":{\"0\":0,\"1\":0,\"2\":1,\"3\":2},\"PremiumPrice\":{\"0\":25000,\"1\":29000,\"2\":23000,\"3\":28000}}"}}]
| true | 1 |
<start_data_description><data_path>medical-insurance-premium-prediction/Medicalpremium.csv:
<column_names>
['Age', 'Diabetes', 'BloodPressureProblems', 'AnyTransplants', 'AnyChronicDiseases', 'Height', 'Weight', 'KnownAllergies', 'HistoryOfCancerInFamily', 'NumberOfMajorSurgeries', 'PremiumPrice']
<column_types>
{'Age': 'int64', 'Diabetes': 'int64', 'BloodPressureProblems': 'int64', 'AnyTransplants': 'int64', 'AnyChronicDiseases': 'int64', 'Height': 'int64', 'Weight': 'int64', 'KnownAllergies': 'int64', 'HistoryOfCancerInFamily': 'int64', 'NumberOfMajorSurgeries': 'int64', 'PremiumPrice': 'int64'}
<dataframe_Summary>
{'Age': {'count': 986.0, 'mean': 41.74543610547667, 'std': 13.963371389855682, 'min': 18.0, '25%': 30.0, '50%': 42.0, '75%': 53.0, 'max': 66.0}, 'Diabetes': {'count': 986.0, 'mean': 0.4198782961460446, 'std': 0.49378922875252945, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 1.0}, 'BloodPressureProblems': {'count': 986.0, 'mean': 0.4685598377281947, 'std': 0.49926377774285313, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 1.0}, 'AnyTransplants': {'count': 986.0, 'mean': 0.055780933062880324, 'std': 0.22961465994678726, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 0.0, 'max': 1.0}, 'AnyChronicDiseases': {'count': 986.0, 'mean': 0.18052738336713997, 'std': 0.3848213056997442, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 0.0, 'max': 1.0}, 'Height': {'count': 986.0, 'mean': 168.18255578093306, 'std': 10.098154827654469, 'min': 145.0, '25%': 161.0, '50%': 168.0, '75%': 176.0, 'max': 188.0}, 'Weight': {'count': 986.0, 'mean': 76.95030425963489, 'std': 14.265095839082017, 'min': 51.0, '25%': 67.0, '50%': 75.0, '75%': 87.0, 'max': 132.0}, 'KnownAllergies': {'count': 986.0, 'mean': 0.2150101419878296, 'std': 0.41103787158451843, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 0.0, 'max': 1.0}, 'HistoryOfCancerInFamily': {'count': 986.0, 'mean': 0.11764705882352941, 'std': 0.3223532463115337, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 0.0, 'max': 1.0}, 'NumberOfMajorSurgeries': {'count': 986.0, 'mean': 0.6673427991886409, 'std': 0.749204951277794, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 3.0}, 'PremiumPrice': {'count': 986.0, 'mean': 24336.713995943206, 'std': 6248.184382239677, 'min': 15000.0, '25%': 21000.0, '50%': 23000.0, '75%': 28000.0, 'max': 40000.0}}
<dataframe_info>
RangeIndex: 986 entries, 0 to 985
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Age 986 non-null int64
1 Diabetes 986 non-null int64
2 BloodPressureProblems 986 non-null int64
3 AnyTransplants 986 non-null int64
4 AnyChronicDiseases 986 non-null int64
5 Height 986 non-null int64
6 Weight 986 non-null int64
7 KnownAllergies 986 non-null int64
8 HistoryOfCancerInFamily 986 non-null int64
9 NumberOfMajorSurgeries 986 non-null int64
10 PremiumPrice 986 non-null int64
dtypes: int64(11)
memory usage: 84.9 KB
<some_examples>
{'Age': {'0': 45, '1': 60, '2': 36, '3': 52}, 'Diabetes': {'0': 0, '1': 1, '2': 1, '3': 1}, 'BloodPressureProblems': {'0': 0, '1': 0, '2': 1, '3': 1}, 'AnyTransplants': {'0': 0, '1': 0, '2': 0, '3': 0}, 'AnyChronicDiseases': {'0': 0, '1': 0, '2': 0, '3': 1}, 'Height': {'0': 155, '1': 180, '2': 158, '3': 183}, 'Weight': {'0': 57, '1': 73, '2': 59, '3': 93}, 'KnownAllergies': {'0': 0, '1': 0, '2': 0, '3': 0}, 'HistoryOfCancerInFamily': {'0': 0, '1': 0, '2': 0, '3': 0}, 'NumberOfMajorSurgeries': {'0': 0, '1': 0, '2': 1, '3': 2}, 'PremiumPrice': {'0': 25000, '1': 29000, '2': 23000, '3': 28000}}
<end_description>
| 4,020 | 5 | 5,031 | 4,020 |
129471011
|
import pandas as pd
data = pd.read_csv("/kaggle/input/review/IMDB Dataset.csv")
data.head()
import nltk

nltk.download("punkt")  # word_tokenize requires the punkt tokenizer models
def fun(text):
t = nltk.word_tokenize(text)
return t
data["tokens"] = data["review"].apply(lambda x: fun(x))
data.head()
from nltk.corpus import stopwords
nltk.download("stopwords")
sw = set(stopwords.words("english"))
def nfun(text):
t = [word for word in text if word.lower() not in sw]
return t
data["filteredtokens"] = data["tokens"].apply(lambda x: nfun(x))
data.head()
data = data.drop(["review", "tokens"], axis=1)
data = pd.get_dummies(columns=["sentiment"], data=data)
def merge(l):
s = " ".join(l)
return s
data["filteredtokens"] = data["filteredtokens"].apply(lambda x: merge(x))
from sklearn.model_selection import train_test_split
x = data["filteredtokens"]
y = data.drop(["filteredtokens"], axis=1)
from sklearn.feature_extraction.text import CountVectorizer
v = CountVectorizer()
x = v.fit_transform(x)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
model.fit(x_train, y_train)
y = model.predict(x_test)
model.score(x_test, y_test)
y
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/471/129471011.ipynb
| null | null |
[{"Id": 129471011, "ScriptId": 38358585, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14115484, "CreationDate": "05/14/2023 05:39:25", "VersionNumber": 1.0, "Title": "review analysis", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 65.0, "LinesInsertedFromPrevious": 65.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
data = pd.read_csv("/kaggle/input/review/IMDB Dataset.csv")
data.head()
import nltk

nltk.download("punkt")  # word_tokenize requires the punkt tokenizer models
def fun(text):
t = nltk.word_tokenize(text)
return t
data["tokens"] = data["review"].apply(lambda x: fun(x))
data.head()
from nltk.corpus import stopwords
nltk.download("stopwords")
sw = set(stopwords.words("english"))
def nfun(text):
t = [word for word in text if word.lower() not in sw]
return t
data["filteredtokens"] = data["tokens"].apply(lambda x: nfun(x))
data.head()
data = data.drop(["review", "tokens"], axis=1)
data = pd.get_dummies(columns=["sentiment"], data=data)
def merge(l):
s = " ".join(l)
return s
data["filteredtokens"] = data["filteredtokens"].apply(lambda x: merge(x))
from sklearn.model_selection import train_test_split
x = data["filteredtokens"]
y = data.drop(["filteredtokens"], axis=1)
from sklearn.feature_extraction.text import CountVectorizer
v = CountVectorizer()
x = v.fit_transform(x)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
model.fit(x_train, y_train)
y = model.predict(x_test)
model.score(x_test, y_test)
y
| false | 0 | 403 | 0 | 403 | 403 |
||
129471331
|
# This is the overview class
"""Data Analytics class
with Python - Weekends
in FITA Academy"""
print("Welcome to Data Analytics")
a = 5
print(a)
b = 2.8
c = "Data"
print(type(a))
print(type(b))
print(type(c))
print(a // b)
print(a**2)
# Comparison (<,<=,>,>=,==,!=)
d = 5
print(a < b)
print(a <= b)
print(a > b)
print(a >= b)
print(a == b)
print(a == d)
print(a != b)
# Assignment - =,+=,-=,*=,/=,%=,**=,//=
a += b # (a=a+b)
print(a)
a -= b
print(a)
a *= b
print(a)
a /= b
print(a)
a %= b
print(a)
a **= b
print(a)
a //= b
print(a)
# Logical - and, or, not
print(b and a)
print(a or b)
print(not b)
# Membership operator - in, not in
fruits = ["lemon", "jack", "grapes", "lichie"]
print("lemon" in fruits)
print("venilla" in fruits)
print("Fig" not in fruits)
# Identity operator - is, is not
e = 5
print(d is e)
print(c is e)
print(c is not e)
# Bitwise operators - &, |, ^, ~
f = 6  # 'f' was never defined in the original; define it so the bitwise examples below run
g = 8
h = 1
print(d & f)
print(f | g)
print(h ^ f)
print(~g)
print(~1)
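# To see why the bitwise results come out as they do, it helps to look at the binary
# representations involved (an added illustration, using the variables defined above,
# including the added f).
for value in (d, f, g, h):
    print(value, "->", bin(value))
print(bin(d & f), bin(f | g), bin(h ^ f))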
print(a * b**2 + 3)
# Python Numbers - int,float
id = int(10)
print(type(id))
no = 99.5
print(type(no))
print(oct(id))
print(hex(id))
id = id + no
print(id, type(id))
no = no + 0.5
print(no, type(no))
no = int(no)
print(no, type(no))
import math as m
km = 345.76
print(m.ceil(km))
print(m.floor(km))
print(m.factorial(5))
print(m.fabs(-11))
print(m.trunc(-11.11))
print(m.pow(2, 3))
print(m.log(5))
# Strings
course = "Python"
spe_cou = "Data Analytics with Python"
stu = course + spe_cou
print(stu)
course * 3
print("f" in spe_cou)
print(ord("d"))
print(ord("#"))
print(chr(100))
print(chr(35))
print(len(course))
print(len(spe_cou))
print(str("Fita"))
print(str(45.5))
print(str(10 + 3))
print(course[5])
print(course[-2])
print(course[0:4])
print(course[2:])
print(course[:4])
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/471/129471331.ipynb
| null | null |
[{"Id": 129471331, "ScriptId": 38464017, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 3854517, "CreationDate": "05/14/2023 05:43:30", "VersionNumber": 2.0, "Title": "Data Analytics", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 150.0, "LinesInsertedFromPrevious": 142.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 8.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# This is the overview class
"""Data Analytics class
with Python - Weekends
in FITA Academy"""
print("Welcome to Data Analytics")
a = 5
print(a)
b = 2.8
c = "Data"
print(type(a))
print(type(b))
print(type(c))
print(a // b)
print(a**2)
# Comparison (<,<=,>,>=,==,!=)
d = 5
print(a < b)
print(a <= b)
print(a > b)
print(a >= b)
print(a == b)
print(a == d)
print(a != b)
# Assignment - =,+=,-=,*=,/=,%=,**=,//=
a += b # (a=a+b)
print(a)
a -= b
print(a)
a *= b
print(a)
a /= b
print(a)
a %= b
print(a)
a **= b
print(a)
a //= b
print(a)
# Logical - and, or, not
print(b and a)
print(a or b)
print(not b)
# Membership operator - in, not in
fruits = ["lemon", "jack", "grapes", "lichie"]
print("lemon" in fruits)
print("venilla" in fruits)
print("Fig" not in fruits)
# Identity operator - is, is not
e = 5
print(d is e)
print(c is e)
print(c is not e)
# Bitwise operators - &, |, ^, ~
f = 6  # 'f' was never defined in the original; define it so the bitwise examples below run
g = 8
h = 1
print(d & f)
print(f | g)
print(h ^ f)
print(~g)
print(~1)
print(a * b**2 + 3)
# Python Numbers - int,float
id = int(10)
print(type(id))
no = 99.5
print(type(no))
print(oct(id))
print(hex(id))
id = id + no
print(id, type(id))
no = no + 0.5
print(no, type(no))
no = int(no)
print(no, type(no))
import math as m
km = 345.76
print(m.ceil(km))
print(m.floor(km))
print(m.factorial(5))
print(m.fabs(-11))
print(m.trunc(-11.11))
print(m.pow(2, 3))
print(m.log(5))
# Strings
course = "Python"
spe_cou = "Data Analytics with Python"
stu = course + spe_cou
print(stu)
course * 3
print("f" in spe_cou)
print(ord("d"))
print(ord("#"))
print(chr(100))
print(chr(35))
print(len(course))
print(len(spe_cou))
print(str("Fita"))
print(str(45.5))
print(str(10 + 3))
print(course[5])
print(course[-2])
print(course[0:4])
print(course[2:])
print(course[:4])
| false | 0 | 798 | 0 | 798 | 798 |
||
129576488
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
import seaborn as sns
import matplotlib.pyplot as plt
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
df = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
df.head()
df.hist()
df["Pollinators"] = df.honeybee + df.bumbles + df.andrena + df.osmia
plt.figure(figsize=(20, 20))
sns.heatmap(df.corr(), annot=True, mask=np.zeros_like(df.corr(), dtype=bool))
sample = pd.read_csv("/kaggle/input/playground-series-s3e14/sample_submission.csv")
sample.head()
# remove the id because it has a natural (index-like) structure and should not explain the crop yields
crop_yield_train = df.pop("yield")
train_id = df.pop("id")
X_train, X_test, y_train, y_test = train_test_split(df, crop_yield_train)
model = LinearRegression()
model.fit(X_train, y_train)
model.score(X_test, y_test)
test_df = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
test_id = test_df.pop("id")
test_model = LinearRegression()
test_model.fit(df, crop_yield_train)
pred = test_model.predict(test_df)
ret_df = pd.DataFrame({"id": test_id, "yield": pred})
ret_df.head()
ret_df.to_csv("crop_yield_submission.csv", index=False)
# ## With Scaling
# initialize the scaler and fit transform the explanatory variables
scaler = StandardScaler()
df_scaled = df.copy()
df_scaled = scaler.fit_transform(df_scaled)
# get performance of the scaled model on a held-out validation split
X_train, X_test, y_train, y_test = train_test_split(df_scaled, crop_yield_train)
model_scale = LinearRegression()
model_scale.fit(X_train, y_train)
model_scale.score(X_test, y_test)
# # As expected, since the data was roughly normal to start with, using the standard scaler does not affect performance positively
test_df_scale = test_df.copy()
test_df_scale = scaler.transform(test_df_scale)  # reuse the scaler fitted on the training features
model_scale.fit(df_scaled, crop_yield_train)
scale_pred = model_scale.predict(test_df_scale)
ret_df_scale = pd.DataFrame({"id": test_id, "yield": scale_pred})
ret_df.to_csv("crop_yield_submission_scaled.csv", index=False)
# ## Random Forest
classifier = RandomForestRegressor()
X_train, X_test, y_train, y_test = train_test_split(df, crop_yield_train)
classifier.fit(X_train, y_train)
classifier.score(X_test, y_test)
classifier.fit(df, crop_yield_train)
pred = classifier.predict(test_df)
random_forest_df = pd.DataFrame({"id": test_id, "yield": pred})
random_forest_df.to_csv("crop_yield_submission_random_forest.csv", index=False)
# # Lasso Regression
Lasso_clf = Lasso(2)
Lasso_clf.fit(X_train, y_train)
Lasso_clf.score(X_test, y_test)
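# The single train/test splits above can be noisy, so as an added sketch (not part of the
# original submission workflow) the three model families can also be compared with
# 5-fold cross-validation on the full training features.
from sklearn.model_selection import cross_val_score

for name, estimator in [
    ("LinearRegression", LinearRegression()),
    ("RandomForest", RandomForestRegressor(random_state=0)),
    ("Lasso(alpha=2)", Lasso(2)),
]:
    scores = cross_val_score(estimator, df, crop_yield_train, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f} (+/- {scores.std():.3f})")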
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/576/129576488.ipynb
| null | null |
[{"Id": 129576488, "ScriptId": 38037997, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12180553, "CreationDate": "05/15/2023 01:44:40", "VersionNumber": 1.0, "Title": "Crop Yields", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 91.0, "LinesInsertedFromPrevious": 91.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
import seaborn as sns
import matplotlib.pyplot as plt
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
df = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
df.head()
df.hist()
df["Pollinators"] = df.honeybee + df.bumbles + df.andrena + df.osmia
plt.figure(figsize=(20, 20))
sns.heatmap(df.corr(), annot=True, mask=np.zeros_like(df.corr(), dtype=bool))
sample = pd.read_csv("/kaggle/input/playground-series-s3e14/sample_submission.csv")
sample.head()
# remove the id because it has a natural (index-like) structure and should not explain the crop yields
crop_yield_train = df.pop("yield")
train_id = df.pop("id")
X_train, X_test, y_train, y_test = train_test_split(df, crop_yield_train)
model = LinearRegression()
model.fit(X_train, y_train)
model.score(X_test, y_test)
test_df = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
test_id = test_df.pop("id")
test_model = LinearRegression()
test_model.fit(df, crop_yield_train)
pred = test_model.predict(test_df)
ret_df = pd.DataFrame({"id": test_id, "yield": pred})
ret_df.head()
ret_df.to_csv("crop_yield_submission.csv", index=False)
# ## With Scaling
# initialize the scaler and fit transform the explanatory variables
scaler = StandardScaler()
df_scaled = df.copy()
df_scaled = scaler.fit_transform(df_scaled)
# get performance of the scaled model on a held-out validation split
X_train, X_test, y_train, y_test = train_test_split(df_scaled, crop_yield_train)
model_scale = LinearRegression()
model_scale.fit(X_train, y_train)
model_scale.score(X_test, y_test)
# # As expected, since the data was roughly normal to start with, using the standard scaler does not affect performance positively
test_df_scale = test_df.copy()
test_df_scale = scaler.transform(test_df_scale)  # reuse the scaler fitted on the training features
model_scale.fit(df_scaled, crop_yield_train)
scale_pred = model_scale.predict(test_df_scale)
ret_df_scale = pd.DataFrame({"id": test_id, "yield": scale_pred})
ret_df.to_csv("crop_yield_submission_scaled.csv", index=False)
# ## Random Forest
classifier = RandomForestRegressor()
X_train, X_test, y_train, y_test = train_test_split(df, crop_yield_train)
classifier.fit(X_train, y_train)
classifier.score(X_test, y_test)
classifier.fit(df, crop_yield_train)
pred = classifier.predict(test_df)
random_forest_df = pd.DataFrame({"id": test_id, "yield": pred})
random_forest_df.to_csv("crop_yield_submission_random_forest.csv", index=False)
# # Lasso Regression
Lasso_clf = Lasso(2)
Lasso_clf.fit(X_train, y_train)
Lasso_clf.score(X_test, y_test)
| false | 0 | 972 | 0 | 972 | 972 |
||
129495798
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# ## This dataset includes IMDB movie ratings for films released between 2006 and 2016, and this notebook was created for Nurullah Cildag's first assignment in his data science course.
# ### This notebook is created together with @MertUrper in one of his study group working sessions.
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv("/kaggle/input/imdb-movie-data-2006-2016/IMDB-Movie-Data.csv")
df.head()
# let's make the index column the title of the movie
df.set_index("Title", inplace=True)
df.head(3)
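# As an added sketch (the original notebook stops here), a few quick checks give an overview
# of the columns, dtypes and missing values in the ratings data.
print(df.shape)
print(df.dtypes)
print(df.isnull().sum())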
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/495/129495798.ipynb
| null | null |
[{"Id": 129495798, "ScriptId": 38505328, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5266661, "CreationDate": "05/14/2023 09:46:56", "VersionNumber": 1.0, "Title": "notebook92045c6e59", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 40.0, "LinesInsertedFromPrevious": 40.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# ## This dataset includes IMDB movie ratings for films released between 2006 and 2016, and this notebook was created for Nurullah Cildag's first assignment in his data science course.
# ### This notebook is created together with @MertUrper in one of his study group working sessions.
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv("/kaggle/input/imdb-movie-data-2006-2016/IMDB-Movie-Data.csv")
df.head()
# let's make the index column the title of the movie
df.set_index("Title", inplace=True)
df.head(3)
| false | 0 | 367 | 0 | 367 | 367 |
||
129495012
|
<jupyter_start><jupyter_text>Bank Customer Churn
RowNumber—corresponds to the record (row) number and has no effect on the output.
CustomerId—contains random values and has no effect on customer leaving the bank.
Surname—the surname of a customer has no impact on their decision to leave the bank.
CreditScore—can have an effect on customer churn, since a customer with a higher credit score is less likely to leave the bank.
Geography—a customer’s location can affect their decision to leave the bank.
Gender—it’s interesting to explore whether gender plays a role in a customer leaving the bank.
Age—this is certainly relevant, since older customers are less likely to leave their bank than younger ones.
Tenure—refers to the number of years that the customer has been a client of the bank. Normally, older clients are more loyal and less likely to leave a bank.
Balance—also a very good indicator of customer churn, as people with a higher balance in their accounts are less likely to leave the bank compared to those with lower balances.
NumOfProducts—refers to the number of products that a customer has purchased through the bank.
HasCrCard—denotes whether or not a customer has a credit card. This column is also relevant, since people with a credit card are less likely to leave the bank.
IsActiveMember—active customers are less likely to leave the bank.
EstimatedSalary—as with balance, people with lower salaries are more likely to leave the bank compared to those with higher salaries.
Exited—whether or not the customer left the bank.
Complain—customer has complaint or not.
Satisfaction Score—Score provided by the customer for their complaint resolution.
Card Type—type of card hold by the customer.
Points Earned—the points earned by the customer for using credit card.
Acknowledgements
As we know, it is much more expensive to sign in a new client than keeping an existing one.
It is advantageous for banks to know what leads a client towards the decision to leave the company.
Churn prevention allows companies to develop loyalty programs and retention campaigns to keep as many customers as possible.
Kaggle dataset identifier: bank-customer-churn
<jupyter_script>import warnings
warnings.filterwarnings("ignore")
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
cp = sns.color_palette("pastel")
plt.style.use(plt.style.available)  # note: passing the full list composes every available style sheet; a single style name is more typical
# plt.style.use('darkgrid')
# plt.style.available
data = pd.read_csv("/kaggle/input/bank-customer-churn/Customer-Churn-Records.csv")
data
feat = [
"Geography",
"Gender",
"NumOfProducts",
"HasCrCard",
"IsActiveMember",
"Exited",
"Complain",
"Satisfaction Score",
"Card Type",
]
fig, axes = plt.subplots(3, 3, figsize=(15, 13), sharex=False, sharey=False)
for i, feature in enumerate(feat):
row = i // 3
column = i % 3
ax = axes[row, column]
data.groupby(feature).count()["RowNumber"].sort_values(ascending=False).plot(
kind="bar", ax=ax, color=cp[1:]
)
ax.set_title(feature, backgroundcolor="skyblue", font="Arial", fontsize=14)
ax.tick_params(axis="x", labelrotation=0)
labels = data.groupby(feature).count()["RowNumber"].sort_values(ascending=False)
ax.bar_label(ax.containers[0], labels=labels, label_type="edge")
plt.tight_layout()
plt.show()
feat = ["CreditScore", "Age", "Balance"]
fig, axes = plt.subplots(2, 2, figsize=(15, 7), sharex=False, sharey=False)
for i, feature in enumerate(feat):
row = i // 2
column = i % 2
ax = axes[row, column]
sns.histplot(data, x=feature, ax=ax, kde=True, palette="pastel")
ax.tick_params(axis="x", labelrotation=45)
ax.set_title(feature, backgroundcolor="skyblue", font="Arial", fontsize=14)
plt.tight_layout()
plt.show()
sns.boxplot(data=data, x="Balance")
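# The dataset description suggests several drivers of churn (activity, geography, complaints).
# As an added sketch, the churn rate can be checked directly for a few of those categories.
for feature in ["Geography", "Gender", "IsActiveMember", "Complain"]:
    churn_rate = data.groupby(feature)["Exited"].mean().sort_values(ascending=False)
    print(f"\nChurn rate by {feature}:")
    print(churn_rate.round(3))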
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/495/129495012.ipynb
|
bank-customer-churn
|
radheshyamkollipara
|
[{"Id": 129495012, "ScriptId": 38503843, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9201197, "CreationDate": "05/14/2023 09:38:22", "VersionNumber": 1.0, "Title": "Subplots_Customer_churn", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 49.0, "LinesInsertedFromPrevious": 49.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185609731, "KernelVersionId": 129495012, "SourceDatasetVersionId": 5550559}]
|
[{"Id": 5550559, "DatasetId": 3197960, "DatasourceVersionId": 5625285, "CreatorUserId": 14862076, "LicenseName": "Other (specified in description)", "CreationDate": "04/28/2023 16:32:01", "VersionNumber": 1.0, "Title": "Bank Customer Churn", "Slug": "bank-customer-churn", "Subtitle": "Bank Customer Data for Customer Churn", "Description": "RowNumber\u2014corresponds to the record (row) number and has no effect on the output.\nCustomerId\u2014contains random values and has no effect on customer leaving the bank.\nSurname\u2014the surname of a customer has no impact on their decision to leave the bank.\nCreditScore\u2014can have an effect on customer churn, since a customer with a higher credit score is less likely to leave the bank.\nGeography\u2014a customer\u2019s location can affect their decision to leave the bank.\nGender\u2014it\u2019s interesting to explore whether gender plays a role in a customer leaving the bank.\nAge\u2014this is certainly relevant, since older customers are less likely to leave their bank than younger ones.\nTenure\u2014refers to the number of years that the customer has been a client of the bank. Normally, older clients are more loyal and less likely to leave a bank.\nBalance\u2014also a very good indicator of customer churn, as people with a higher balance in their accounts are less likely to leave the bank compared to those with lower balances.\nNumOfProducts\u2014refers to the number of products that a customer has purchased through the bank.\nHasCrCard\u2014denotes whether or not a customer has a credit card. This column is also relevant, since people with a credit card are less likely to leave the bank.\nIsActiveMember\u2014active customers are less likely to leave the bank.\nEstimatedSalary\u2014as with balance, people with lower salaries are more likely to leave the bank compared to those with higher salaries.\nExited\u2014whether or not the customer left the bank.\nComplain\u2014customer has complaint or not.\nSatisfaction Score\u2014Score provided by the customer for their complaint resolution.\nCard Type\u2014type of card hold by the customer.\nPoints Earned\u2014the points earned by the customer for using credit card.\n\nAcknowledgements\n\nAs we know, it is much more expensive to sign in a new client than keeping an existing one.\n\nIt is advantageous for banks to know what leads a client towards the decision to leave the company.\n\nChurn prevention allows companies to develop loyalty programs and retention campaigns to keep as many customers as possible.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3197960, "CreatorUserId": 14862076, "OwnerUserId": 14862076.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5550559.0, "CurrentDatasourceVersionId": 5625285.0, "ForumId": 3262570, "Type": 2, "CreationDate": "04/28/2023 16:32:01", "LastActivityDate": "04/28/2023", "TotalViews": 39315, "TotalDownloads": 6814, "TotalVotes": 97, "TotalKernels": 52}]
|
[{"Id": 14862076, "UserName": "radheshyamkollipara", "DisplayName": "Radheshyam Kollipara", "RegisterDate": "04/28/2023", "PerformanceTier": 0}]
|
import warnings
warnings.filterwarnings("ignore")
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
cp = sns.color_palette("pastel")
plt.style.use(plt.style.available)  # note: passing the full list composes every available style sheet; a single style name is more typical
# plt.style.use('darkgrid')
# plt.style.available
data = pd.read_csv("/kaggle/input/bank-customer-churn/Customer-Churn-Records.csv")
data
feat = [
"Geography",
"Gender",
"NumOfProducts",
"HasCrCard",
"IsActiveMember",
"Exited",
"Complain",
"Satisfaction Score",
"Card Type",
]
fig, axes = plt.subplots(3, 3, figsize=(15, 13), sharex=False, sharey=False)
for i, feature in enumerate(feat):
row = i // 3
column = i % 3
ax = axes[row, column]
data.groupby(feature).count()["RowNumber"].sort_values(ascending=False).plot(
kind="bar", ax=ax, color=cp[1:]
)
ax.set_title(feature, backgroundcolor="skyblue", font="Arial", fontsize=14)
ax.tick_params(axis="x", labelrotation=0)
labels = data.groupby(feature).count()["RowNumber"].sort_values(ascending=False)
ax.bar_label(ax.containers[0], labels=labels, label_type="edge")
plt.tight_layout()
plt.show()
feat = ["CreditScore", "Age", "Balance"]
fig, axes = plt.subplots(2, 2, figsize=(15, 7), sharex=False, sharey=False)
for i, feature in enumerate(feat):
row = i // 2
column = i % 2
ax = axes[row, column]
sns.histplot(data, x=feature, ax=ax, kde=True, palette="pastel")
ax.tick_params(axis="x", labelrotation=45)
ax.set_title(feature, backgroundcolor="skyblue", font="Arial", fontsize=14)
plt.tight_layout()
plt.show()
sns.boxplot(data=data, x="Balance")
| false | 1 | 561 | 0 | 1,062 | 561 |
||
129501058
|
<jupyter_start><jupyter_text>Analyzing Screen Time
This dataset contains the usage statistics of various apps on a phone.
Inspiration:
- Do the number of notifications and the number of times the user opens an app have a correlation?
- Does usage have a correlation with the number of notifications?
Kaggle dataset identifier: analyzing-screen-time
<jupyter_script># # **Screen Time Analysis**
# Screen Time Analysis is the task of analyzing and reporting which applications and websites a user spends time on, and for how long. Apple devices have one of the best ways of creating a screen time report.
# **Screen Time Analysis on iPhone:**
# 
# For the task of screen time analysis, I found an ideal dataset that contains data about:
#
# * Date
# * Usage of Applications
# * Number of Notifications from Applications
# * Number of times apps opened
# # **Screen Time Analysis using Python**
# Let’s start the task of screen time analysis by importing the necessary Python libraries and the dataset:
import pandas as pd
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
data = pd.read_csv("/kaggle/input/analyzing-screen-time/Screentime - App Details.csv")
print(data.head())
# Now let’s have a look if the dataset has any null values or not:
data.isnull().sum()
# The dataset doesn’t have any null values. Now let’s have a look at the descriptive statistics of the data:
print(data.describe())
# Now let’s start with analyzing the screen time of the user. I will first look at the amount of usage of the apps:
figure = px.bar(data_frame=data, x="Date", y="Usage", color="App", title="Usage")
figure.show()
# Now let’s have a look at the number of notifications from the apps:
figure = px.bar(
data_frame=data, x="Date", y="Notifications", color="App", title="Notifications"
)
figure.show()
# Now let’s have a look at the number of times the apps opened:
figure = px.bar(
data_frame=data, x="Date", y="Times opened", color="App", title="Times Opened"
)
figure.show()
# We generally use our smartphones when we get notified by any app. So let’s have a look at the relationship between the number of notifications and the amount of usage:
figure = px.scatter(
data_frame=data,
x="Notifications",
y="Usage",
size="Notifications",
trendline="ols",
title="Relationship Between Number of Notifications and Usage",
)
figure.show()
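# The scatter plot above hints at a relationship; as an added sketch, the pairwise Pearson
# correlations between usage, notifications and opens can be computed directly.
corr = data[["Usage", "Notifications", "Times opened"]].corr()
print(corr.round(3))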
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/501/129501058.ipynb
|
analyzing-screen-time
|
ruchi798
|
[{"Id": 129501058, "ScriptId": 38506417, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11302066, "CreationDate": "05/14/2023 10:42:00", "VersionNumber": 1.0, "Title": "Screen Time Analysis using Python", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 70.0, "LinesInsertedFromPrevious": 70.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185621811, "KernelVersionId": 129501058, "SourceDatasetVersionId": 4238944}]
|
[{"Id": 4238944, "DatasetId": 2498258, "DatasourceVersionId": 4296262, "CreatorUserId": 3309826, "LicenseName": "CC0: Public Domain", "CreationDate": "09/22/2022 23:58:02", "VersionNumber": 2.0, "Title": "Analyzing Screen Time", "Slug": "analyzing-screen-time", "Subtitle": "Cumulative and individual Screen Time of apps", "Description": "This dataset contains the usage statistics of various apps on a phone.\n\nInspiration:\n- Do the number of notifications and the number of times the user opens an app have a correlation?\n- Does usage have a correlation with the number of notifications?", "VersionNotes": "Data Update 2022/09/22", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2498258, "CreatorUserId": 3309826, "OwnerUserId": 3309826.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 4238944.0, "CurrentDatasourceVersionId": 4296262.0, "ForumId": 2526436, "Type": 2, "CreationDate": "09/22/2022 23:56:40", "LastActivityDate": "09/22/2022", "TotalViews": 13806, "TotalDownloads": 1886, "TotalVotes": 61, "TotalKernels": 11}]
|
[{"Id": 3309826, "UserName": "ruchi798", "DisplayName": "Ruchi Bhatia", "RegisterDate": "06/04/2019", "PerformanceTier": 4}]
|
# # **Screen Time Analysis**
# Screen Time Analysis is the task of analyzing and reporting which applications and websites a user spends time on, and for how long. Apple devices have one of the best ways of creating a screen time report.
# **Screen Time Analysis on iPhone:**
# 
# For the task of screen time analysis, I found an ideal dataset that contains data about:
#
# * Date
# * Usage of Applications
# * Number of Notifications from Applications
# * Number of times apps opened
# # **Screen Time Analysis using Python**
# Let’s start the task of screen time analysis by importing the necessary Python libraries and the dataset:
import pandas as pd
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
data = pd.read_csv("/kaggle/input/analyzing-screen-time/Screentime - App Details.csv")
print(data.head())
# Now let’s have a look if the dataset has any null values or not:
data.isnull().sum()
# The dataset doesn’t have any null values. Now let’s have a look at the descriptive statistics of the data:
print(data.describe())
# Now let’s start with analyzing the screen time of the user. I will first look at the amount of usage of the apps:
figure = px.bar(data_frame=data, x="Date", y="Usage", color="App", title="Usage")
figure.show()
# Now let’s have a look at the number of notifications from the apps:
figure = px.bar(
data_frame=data, x="Date", y="Notifications", color="App", title="Notifications"
)
figure.show()
# Now let’s have a look at the number of times the apps opened:
figure = px.bar(
data_frame=data, x="Date", y="Times opened", color="App", title="Times Opened"
)
figure.show()
# We generally use our smartphones when we get notified by any app. So let’s have a look at the relationship between the number of notifications and the amount of usage:
figure = px.scatter(
data_frame=data,
x="Notifications",
y="Usage",
size="Notifications",
trendline="ols",
title="Relationship Between Number of Notifications and Usage",
)
figure.show()
| false | 1 | 593 | 0 | 670 | 593 |
||
129501781
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # ICR - Identifying Age-Related Conditions
# ## Use Machine Learning to detect conditions with measurements of anonymous characteristics
# ## Context
# They say age is just a number but a whole host of health issues come with aging. From heart disease and dementia to hearing loss and arthritis, aging is a risk factor for numerous diseases and complications. The growing field of bioinformatics includes research into interventions that can help slow and reverse biological aging and prevent major age-related ailments. Data science could have a role to play in developing new methods to solve problems with diverse data, even if the number of samples is small.
# Currently, models like XGBoost and random forest are used to predict medical conditions yet the models' performance is not good enough. Dealing with critical problems where lives are on the line, models need to make correct predictions reliably and consistently between different cases.
# Founded in 2015, competition host InVitro Cell Research, LLC (ICR) is a privately funded company focused on regenerative and preventive personalized medicine. Their offices and labs in the greater New York City area offer state-of-the-art research space. InVitro Cell Research's Scientists are what set them apart, helping guide and defining their mission of researching how to repair aging people fast.
# In this competition, you’ll work with measurements of health characteristic data to solve critical problems in bioinformatics. Based on minimal training, you’ll create a model to predict if a person has any of three medical conditions, with an aim to improve on existing methods.
# You could help advance the growing field of bioinformatics and explore new methods to solve complex problems with diverse data.
# ### Importing necessary libraries
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
# ## 1. Data Understanding and inspection of missing and incompatible values
# Loading training dataset
df_train = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/train.csv")
# Loading greeks dataset
df_greeks = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/greeks.csv")
# Inspecting the Training dataset
df_train.info()
# Inspecting the Greeks dataset
df_greeks.info()
# Also let us merge the two datasets to form a final master dataset containing all the necessary details
df = pd.merge(df_train, df_greeks, on="Id")
# Inspecting the data
df.info()
# We therefore have Id, EJ, and the greek columns as categorical values, with the rest being numerical
# Class is our target variable. It also seems there are a few missing values in the training and master datasets; let us handle these next
# Checking the percentage of missing value by columns
missing_values = df_train.isnull().mean() * 100
missing_values[missing_values > 0]
# Checking the percentage of missing value by columns in the master data set too
missing_values_2 = df.isnull().mean() * 100
missing_values_2[missing_values_2 > 0]
# Thus, these are the columns that have missing values. Among them, BQ and EL are the highest with almost 9.7% values missing
# # 2. EDA and Data correction
# ### Handling missing data
# Let us first check the distribution of the columns with null values
# Printing missing columns
missing_cols = missing_values_2[missing_values_2 > 0].index.to_list()
missing_cols
# Plotting the distributions of the columns with missing values (BQ and EL have the most)
df[missing_cols].hist(bins=100)
plt.show()
# For this, we can use KNN imputer for the following reasons.
# Some Advantages of KNN
# 1. Quick calculation time
#
# 2. Simple algorithm – to interpret
#
# 3. Versatile – useful for regression and classification
#
# 4. High accuracy – you do not need to compare with better-supervised learning models
#
# 5. No assumptions about data – no need to make additional assumptions, tune several parameters, or build a model. This makes it crucial in nonlinear data case.
#
from sklearn.impute import KNNImputer
imputer01 = KNNImputer(n_neighbors=3)
tr_data_01 = imputer01.fit_transform(df[missing_cols])
df[missing_cols] = tr_data_01
df_train[missing_cols] = tr_data_01
# Checking the null value distribution now
# Checking the percentage of missing value by columns
missing_values = df_train.isnull().mean() * 100
missing_values[missing_values > 0]
# Checking the percentage of missing value by columns
missing_values = df.isnull().mean() * 100
missing_values[missing_values > 0]
# ### Checking the distribution of the target variable
df["Class"].hist()
# Plotting a pie chart to understand this better
data = df["Class"].value_counts()
fig = px.pie(data, values=data, names=data.index)
fig.show()
# Therefore, the dataset is highly imbalanced as the classes 1 and 0 are 17.5% and 82.5% of the dataset respectively
# We will be handling this imbalance a little later
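# The notebook defers imbalance handling; as one possible approach (an added sketch, not the
# author's chosen method), class weights can be derived from the training labels and later
# passed to models that accept a class_weight / sample_weight argument.
from sklearn.utils.class_weight import compute_class_weight

class_weights = compute_class_weight(
    class_weight="balanced", classes=np.unique(df["Class"]), y=df["Class"]
)
print(dict(zip(np.unique(df["Class"]), np.round(class_weights, 3))))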
# ### Univariate Analysis
# We begin with setting up a column list
# Setting up column list
target_col = ["Class"]
greek_cols = list(df_greeks.columns)
id_col = ["Id"]
cat_cols = ["EJ"]
num_cols = [
col for col in df.columns if col not in greek_cols + cat_cols + target_col + id_col
]
print(greek_cols + cat_cols + target_col + id_col)
# Checking for categorical columns
df[cat_cols[0]].hist()
# Plotting a pie chart to understand this better
data = df[cat_cols[0]].value_counts()
fig = px.pie(data, values=data, names=data.index)
fig.show()
# Therefore, the values A and B can be mapped to 1 and 0
# Transforming EJ by mapping A and B to 1 and 0 respectively
df[cat_cols[0]] = df[cat_cols[0]].map({"A": 1, "B": 0})
# Plotting a pie chart to understand this better
data = df[cat_cols[0]].value_counts()
fig = px.pie(data, values=data, names=data.index)
fig.show()
# Checking the distribution of numeric columns
# Setting max column width
pd.set_option("display.max_columns", 500)
# Printing the description of numerical columns
df.describe()
# Going forward we would have to scale the numeric columns during model building (a pipeline sketch follows the distribution plots below)
# Plotting the distribution of numerical columns
for i, col in enumerate(num_cols):
plt.figure(i)
# sns.boxplot(x=df[col])
sns.histplot(df, x=col, kde=True)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/501/129501781.ipynb
| null | null |
[{"Id": 129501781, "ScriptId": 38468484, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 2145674, "CreationDate": "05/14/2023 10:49:39", "VersionNumber": 1.0, "Title": "notebooke1f3660014", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 188.0, "LinesInsertedFromPrevious": 188.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # ICR - Identifying Age-Related Conditions
# ## Use Machine Learning to detect conditions with measurements of anonymous characteristics
# ## Context
# They say age is just a number but a whole host of health issues come with aging. From heart disease and dementia to hearing loss and arthritis, aging is a risk factor for numerous diseases and complications. The growing field of bioinformatics includes research into interventions that can help slow and reverse biological aging and prevent major age-related ailments. Data science could have a role to play in developing new methods to solve problems with diverse data, even if the number of samples is small.
# Currently, models like XGBoost and random forest are used to predict medical conditions yet the models' performance is not good enough. Dealing with critical problems where lives are on the line, models need to make correct predictions reliably and consistently between different cases.
# Founded in 2015, competition host InVitro Cell Research, LLC (ICR) is a privately funded company focused on regenerative and preventive personalized medicine. Their offices and labs in the greater New York City area offer state-of-the-art research space. InVitro Cell Research's Scientists are what set them apart, helping guide and defining their mission of researching how to repair aging people fast.
# In this competition, you’ll work with measurements of health characteristic data to solve critical problems in bioinformatics. Based on minimal training, you’ll create a model to predict if a person has any of three medical conditions, with an aim to improve on existing methods.
# You could help advance the growing field of bioinformatics and explore new methods to solve complex problems with diverse data.
# ### Importing necessary libraries
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
# ## 1. Data Understanding and inspection of missing and incompatible values
# Loading training dataset
df_train = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/train.csv")
# Loading greeks dataset
df_greeks = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/greeks.csv")
# Inspecting the Training dataset
df_train.info()
# Inspecting the Greeks dataset
df_greeks.info()
# Also let us merge the two datasets to form a final master dataset containing all the necessary details
df = pd.merge(df_train, df_greeks, on="Id")
# Inspecting the data
df.info()
# We have, therefore, the columns Id and EJ plus the greek columns holding categorical values, with the rest being numerical
# Class is our target variable. It also seems there are a few missing values in the training and master datasets; let us handle these values
# Checking the percentage of missing value by columns
missing_values = df_train.isnull().mean() * 100
missing_values[missing_values > 0]
# Checking the percentage of missing value by columns in the master data set too
missing_values_2 = df.isnull().mean() * 100
missing_values_2[missing_values_2 > 0]
# Thus, these are the columns that have missing values. Among them, BQ and EL have the most, with almost 9.7% of their values missing
# # 2. EDA and Data correction
# ### Handling missing data
# Let us first check the distribution of the columns with null values
# Printing missing columns
missing_cols = missing_values_2[missing_values_2 > 0].index.to_list()
missing_cols
# Plotting the distributions of the columns with missing values (BQ and EL among them)
df[missing_cols].hist(bins=100)
plt.show()
# For this, we can use a KNN imputer, for the following reasons.
# Some advantages of KNN
# 1. Quick calculation time
#
# 2. Simple algorithm – easy to interpret
#
# 3. Versatile – useful for both regression and classification
#
# 4. Reasonable accuracy – often good enough without building a separate supervised model just for imputation
#
# 5. Few assumptions about the data – no need to make additional distributional assumptions, tune many parameters, or build a full model, which makes it handy for nonlinear data
#
from sklearn.impute import KNNImputer
imputer01 = KNNImputer(n_neighbors=3)
tr_data_01 = imputer01.fit_transform(df[missing_cols])
df[missing_cols] = tr_data_01
# df was merged from df_train on Id and keeps the same row order, so the same imputed array is assigned to both frames
df_train[missing_cols] = tr_data_01
# Checking the null value distribution now
# Checking the percentage of missing value by columns
missing_values = df_train.isnull().mean() * 100
missing_values[missing_values > 0]
# Checking the percentage of missing value by columns
missing_values = df.isnull().mean() * 100
missing_values[missing_values > 0]
# ### Checking the distribution of the target variable
df["Class"].hist()
# Plotting a pie chart to understand this better
data = df["Class"].value_counts()
fig = px.pie(data, values=data, names=data.index)
fig.show()
# Therefore, the dataset is highly imbalanced as the classes 1 and 0 are 17.5% and 82.5% of the dataset respectively
# We will be handling this imbalance a little later
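# One possible way to handle it later is to weight the classes by their frequency; this is only a sketch of that idea (the final approach is decided during model building), using scikit-learn's helper:
from sklearn.utils.class_weight import compute_class_weight

class_weights = compute_class_weight(
    class_weight="balanced", classes=np.unique(df["Class"]), y=df["Class"]
)
print(dict(zip(np.unique(df["Class"]), class_weights)))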
# ### Univariate Analysis
# We begin with setting up a column list
# Setting up column list
target_col = ["Class"]
greek_cols = list(df_greeks.columns)
id_col = ["Id"]
cat_cols = ["EJ"]
num_cols = [
col for col in df.columns if col not in greek_cols + cat_cols + target_col + id_col
]
print(greek_cols + cat_cols + target_col + id_col)
# Checking for categorical columns
df[cat_cols[0]].hist()
# Plotting a pie chart to understand this better
data = df[cat_cols[0]].value_counts()
fig = px.pie(data, values=data, names=data.index)
fig.show()
# Therefore, the values A and B can be mapped to 1 and 0
# Transforming EJ by mapping A and B to 1 and 0 respectively
df[cat_cols[0]] = df[cat_cols[0]].map({"A": 1, "B": 0})
# Plotting a pie chart to understand this better
data = df[cat_cols[0]].value_counts()
fig = px.pie(data, values=data, names=data.index)
fig.show()
# Checking the distribution of numeric columns
# Setting max column width
pd.set_option("display.max_columns", 500)
# Printing the description of numerical columns
df.describe()
# Going forward we would have to scale the numeric columns during model building
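# A minimal sketch of that scaling step, assuming a StandardScaler is used (the notebook has not committed to a particular scaler yet):
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
df_scaled = df.copy()
df_scaled[num_cols] = scaler.fit_transform(df_scaled[num_cols])
df_scaled[num_cols].describe()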
# Plotting the distribution of numerical columns
for i, col in enumerate(num_cols):
plt.figure(i)
# sns.boxplot(x=df[col])
sns.histplot(df, x=col, kde=True)
| false | 0 | 1,903 | 0 | 1,903 | 1,903 |
||
129821434
|
<jupyter_start><jupyter_text>moviereviews
Kaggle dataset identifier: moviereviews
<jupyter_script># ## Perform imports and load the dataset
# The dataset contains the text of 2000 movie reviews. 1000 are positive, 1000 are negative, and the text has been preprocessed as a tab-delimited file.
import numpy as np
import pandas as pd
df = pd.read_csv("/kaggle/input/moviereviews/moviereviews.tsv", sep="\t")
df.head()
len(df)
# ### Take a look at a typical review. This one is labeled "negative":
from IPython.display import Markdown, display
display(Markdown("> " + df["review"][0]))
# ## Check for missing values:
# We have intentionally included records with missing data. Some have NaN values, others have short strings composed of only spaces. This might happen if a reviewer declined to provide a comment with their review. We will show two ways using pandas to identify and remove records containing empty data.
# * NaN records are efficiently handled with [.isnull()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html) and [.dropna()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html)
# * Strings that contain only whitespace can be handled with [.isspace()](https://docs.python.org/3/library/stdtypes.html#str.isspace), [.itertuples()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.itertuples.html), and [.drop()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html)
# ### Detect & remove NaN values:
# Check for the existence of NaN values in a cell:
df.isnull().sum()
# 35 records show **NaN** (this stands for "not a number" and is equivalent to *None*). These are easily removed using the `.dropna()` pandas function.
# CAUTION: By setting inplace=True, we permanently affect the DataFrame currently in memory, and this can't be undone. However, it does *not* affect the original source data. If we needed to, we could always load the original DataFrame from scratch.
df.dropna(inplace=True)
len(df)
# ### Detect & remove empty strings
# Technically, we're dealing with "whitespace only" strings. If the original .tsv file had contained empty strings, pandas **.read_csv()** would have assigned NaN values to those cells by default.
# In order to detect these strings we need to iterate over each row in the DataFrame. The **.itertuples()** pandas method is a good tool for this as it provides access to every field. For brevity we'll assign the names `i`, `lb` and `rv` to the `index`, `label` and `review` columns.
blanks = [] # start with an empty list
for i, lb, rv in df.itertuples(): # iterate over the DataFrame
if type(rv) == str: # avoid NaN values
if rv.isspace(): # test 'review' for whitespace
blanks.append(i) # add matching index numbers to the list
print(len(blanks), "blanks: ", blanks)
# Next we'll pass our list of index numbers to the **.drop()** method, and set `inplace=True` to make the change permanent.
df.drop(blanks, inplace=True)
len(df)
# Great! We dropped 62 records from the original 2000. Let's continue with the analysis.
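# As an aside, the same whitespace filter can be written without an explicit loop. This is just an equivalent sketch (it assumes the NaN reviews were already dropped), not a change to the pipeline above:
df_no_blanks = df[~df["review"].str.isspace()]
len(df_no_blanks)  # matches len(df) here, since the blank reviews were already removed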
# ## Take a quick look at the `label` column:
df["label"].value_counts()
# ## Split the data into train & test sets:
from sklearn.model_selection import train_test_split
X = df["review"]
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.33, random_state=42
)
# ## Build pipelines to vectorize the data, then train and fit a model
# Now that we have sets to train and test, we'll develop a selection of pipelines, each with a different model.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
# Naïve Bayes:
text_clf_nb = Pipeline(
[
("tfidf", TfidfVectorizer()),
("clf", MultinomialNB()),
]
)
# Linear SVC:
text_clf_lsvc = Pipeline(
[
("tfidf", TfidfVectorizer()),
("clf", LinearSVC()),
]
)
# ## Feed the training data through the first pipeline
# We'll run naïve Bayes first
text_clf_nb.fit(X_train, y_train)
# ## Run predictions and analyze the results (naïve Bayes)
# Form a prediction set
predictions = text_clf_nb.predict(X_test)
# Report the confusion matrix
from sklearn import metrics
print(metrics.confusion_matrix(y_test, predictions))
# Print a classification report
print(metrics.classification_report(y_test, predictions))
# Print the overall accuracy
print(metrics.accuracy_score(y_test, predictions))
# Naïve Bayes gave us better-than-average results at 76.4% for classifying reviews as positive or negative based on text alone. Let's see if we can do better.
# ## Feed the training data through the second pipeline
# Next we'll run Linear SVC
text_clf_lsvc.fit(X_train, y_train)
# ## Run predictions and analyze the results (Linear SVC)
# Form a prediction set
predictions = text_clf_lsvc.predict(X_test)
# Report the confusion matrix
from sklearn import metrics
print(metrics.confusion_matrix(y_test, predictions))
# Print a classification report
print(metrics.classification_report(y_test, predictions))
# Print the overall accuracy
print(metrics.accuracy_score(y_test, predictions))
# Not bad! Based on text alone we correctly classified reviews as positive or negative **84.7%** of the time. In an upcoming section we'll try to improve this score even further by performing *sentiment analysis* on the reviews.
# ## Advanced Topic - Adding Stopwords to CountVectorizer
# By default, **CountVectorizer** and **TfidfVectorizer** do *not* filter stopwords. However, they offer some optional settings, including passing in your own stopword list.
# CAUTION: There are some [known issues](http://aclweb.org/anthology/W18-2502) using Scikit-learn's built-in stopwords list. Some words that are filtered may in fact aid in classification. In this section we'll pass in our own stopword list, so that we know exactly what's being filtered.
# The [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) class accepts the following arguments:
# > *CountVectorizer(input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, **stop_words=None**, token_pattern='(?u)\b\w\w+\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=numpy.int64)*
# [TfidfVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) supports the same arguments and more. Under *stop_words* we have the following options:
# > stop_words : *string {'english'}, list, or None (default)*
# That is, we can run `TfidfVectorizer(stop_words='english')` to accept scikit-learn's built-in list,
# or `TfidfVectorizer(stop_words=['a', 'and', 'the'])` to filter just these three words. In practice we would assign our list to a variable and pass that in instead.
# Scikit-learn's built-in list contains 318 stopwords:
# > from sklearn.feature_extraction import text
# > print(text.ENGLISH_STOP_WORDS)
# ['a', 'about', 'above', 'across', 'after', 'afterwards', 'again', 'against', 'all', 'almost', 'alone', 'along', 'already', 'also', 'although', 'always', 'am', 'among', 'amongst', 'amoungst', 'amount', 'an', 'and', 'another', 'any', 'anyhow', 'anyone', 'anything', 'anyway', 'anywhere', 'are', 'around', 'as', 'at', 'back', 'be', 'became', 'because', 'become', 'becomes', 'becoming', 'been', 'before', 'beforehand', 'behind', 'being', 'below', 'beside', 'besides', 'between', 'beyond', 'bill', 'both', 'bottom', 'but', 'by', 'call', 'can', 'cannot', 'cant', 'co', 'con', 'could', 'couldnt', 'cry', 'de', 'describe', 'detail', 'do', 'done', 'down', 'due', 'during', 'each', 'eg', 'eight', 'either', 'eleven', 'else', 'elsewhere', 'empty', 'enough', 'etc', 'even', 'ever', 'every', 'everyone', 'everything', 'everywhere', 'except', 'few', 'fifteen', 'fifty', 'fill', 'find', 'fire', 'first', 'five', 'for', 'former', 'formerly', 'forty', 'found', 'four', 'from', 'front', 'full', 'further', 'get', 'give', 'go', 'had', 'has', 'hasnt', 'have', 'he', 'hence', 'her', 'here', 'hereafter', 'hereby', 'herein', 'hereupon', 'hers', 'herself', 'him', 'himself', 'his', 'how', 'however', 'hundred', 'i', 'ie', 'if', 'in', 'inc', 'indeed', 'interest', 'into', 'is', 'it', 'its', 'itself', 'keep', 'last', 'latter', 'latterly', 'least', 'less', 'ltd', 'made', 'many', 'may', 'me', 'meanwhile', 'might', 'mill', 'mine', 'more', 'moreover', 'most', 'mostly', 'move', 'much', 'must', 'my', 'myself', 'name', 'namely', 'neither', 'never', 'nevertheless', 'next', 'nine', 'no', 'nobody', 'none', 'noone', 'nor', 'not', 'nothing', 'now', 'nowhere', 'of', 'off', 'often', 'on', 'once', 'one', 'only', 'onto', 'or', 'other', 'others', 'otherwise', 'our', 'ours', 'ourselves', 'out', 'over', 'own', 'part', 'per', 'perhaps', 'please', 'put', 'rather', 're', 'same', 'see', 'seem', 'seemed', 'seeming', 'seems', 'serious', 'several', 'she', 'should', 'show', 'side', 'since', 'sincere', 'six', 'sixty', 'so', 'some', 'somehow', 'someone', 'something', 'sometime', 'sometimes', 'somewhere', 'still', 'such', 'system', 'take', 'ten', 'than', 'that', 'the', 'their', 'them', 'themselves', 'then', 'thence', 'there', 'thereafter', 'thereby', 'therefore', 'therein', 'thereupon', 'these', 'they', 'thick', 'thin', 'third', 'this', 'those', 'though', 'three', 'through', 'throughout', 'thru', 'thus', 'to', 'together', 'too', 'top', 'toward', 'towards', 'twelve', 'twenty', 'two', 'un', 'under', 'until', 'up', 'upon', 'us', 'very', 'via', 'was', 'we', 'well', 'were', 'what', 'whatever', 'when', 'whence', 'whenever', 'where', 'whereafter', 'whereas', 'whereby', 'wherein', 'whereupon', 'wherever', 'whether', 'which', 'while', 'whither', 'who', 'whoever', 'whole', 'whom', 'whose', 'why', 'will', 'with', 'within', 'without', 'would', 'yet', 'you', 'your', 'yours', 'yourself', 'yourselves']
# However, there are words in this list that may influence a classification of movie reviews. With this in mind, let's trim the list to just 60 words:
stopwords = [
"a",
"about",
"an",
"and",
"are",
"as",
"at",
"be",
"been",
"but",
"by",
"can",
"even",
"ever",
"for",
"from",
"get",
"had",
"has",
"have",
"he",
"her",
"hers",
"his",
"how",
"i",
"if",
"in",
"into",
"is",
"it",
"its",
"just",
"me",
"my",
"of",
"on",
"or",
"see",
"seen",
"she",
"so",
"than",
"that",
"the",
"their",
"there",
"they",
"this",
"to",
"was",
"we",
"were",
"what",
"when",
"which",
"who",
"will",
"with",
"you",
]
# Now let's repeat the process above and see if the removal of stopwords improves or impairs our score.
# RUN THIS CELL TO ADD STOPWORDS TO THE LINEAR SVC PIPELINE:
text_clf_lsvc2 = Pipeline(
[
("tfidf", TfidfVectorizer(stop_words=stopwords)),
("clf", LinearSVC()),
]
)
text_clf_lsvc2.fit(X_train, y_train)
predictions = text_clf_lsvc2.predict(X_test)
print(metrics.confusion_matrix(y_test, predictions))
print(metrics.classification_report(y_test, predictions))
print(metrics.accuracy_score(y_test, predictions))
# Our score didn't change that much. We went from 84.7% without filtering stopwords to 84.4% after adding a stopword filter to our pipeline. Keep in mind that 2000 movie reviews is a relatively small dataset. The real gain from stripping stopwords is improved processing speed; depending on the size of the corpus, it might save hours.
# ## Feed new data into a trained model
# Once we've developed a fairly accurate model, it's time to feed new data through it. In this last section we'll write our own review, and see how accurately our model assigns a "positive" or "negative" label to it.
# ### First, make sure the models are trained (both pipelines were fit above)
# ### Next, feed new data to the model's `predict()` method
myreview = "below average"
print(
text_clf_nb.predict([myreview])
) # be sure to put "myreview" inside square brackets
print(text_clf_lsvc.predict([myreview]))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/821/129821434.ipynb
|
moviereviews
|
alawdisoft
|
[{"Id": 129821434, "ScriptId": 38609948, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13408885, "CreationDate": "05/16/2023 17:58:34", "VersionNumber": 1.0, "Title": "movie_review_pipeline_posneg", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 185.0, "LinesInsertedFromPrevious": 185.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186198542, "KernelVersionId": 129821434, "SourceDatasetVersionId": 4741983}]
|
[{"Id": 4741983, "DatasetId": 2744123, "DatasourceVersionId": 4805001, "CreatorUserId": 12594195, "LicenseName": "Unknown", "CreationDate": "12/19/2022 01:51:28", "VersionNumber": 1.0, "Title": "moviereviews", "Slug": "moviereviews", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2744123, "CreatorUserId": 12594195, "OwnerUserId": 12594195.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 4741983.0, "CurrentDatasourceVersionId": 4805001.0, "ForumId": 2777619, "Type": 2, "CreationDate": "12/19/2022 01:51:28", "LastActivityDate": "12/19/2022", "TotalViews": 79, "TotalDownloads": 1, "TotalVotes": 1, "TotalKernels": 1}]
|
[{"Id": 12594195, "UserName": "alawdisoft", "DisplayName": "Ala'a Abdu Saleh Alawdi", "RegisterDate": "11/24/2022", "PerformanceTier": 2}]
|
| false | 0 | 3,733 | 0 | 3,752 | 3,733 |
||
129821893
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # for the graphs
import seaborn as sns
plt.style.use("ggplot")
import nltk
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# ## Read In Data
# Read in data in a data frame
df = pd.read_csv("../input/movie-reviews-dataset/ALL_AUDIENCE_REVIEWS.csv")
df.head()
df["reviewContent"].values[0]
print(df.shape) # 1100 rows, 7 columns
df = df.head(550)  # keep only the first 550 reviews
df.head()
# ## Quick Exploratory Data Analysis (EDA)
ax = (
df["reviewRating"]
.value_counts()
.sort_index()
.plot(kind="bar", title="Count of Reviews by Ratings", figsize=(10, 5))
)
ax.set_xlabel("Review Ratings")
plt.show()
# ## Basic NLTK
example = df["reviewContent"][50]
print(example)
tokens = nltk.word_tokenize(example)
tokens[:10]
tagged = nltk.pos_tag(tokens)
tagged[:10]
entities = nltk.chunk.ne_chunk(tagged)
entities.pprint()
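# A possible next step toward actual sentiment scoring is NLTK's VADER analyzer; this is only a sketch of that idea and not part of the analysis above:
nltk.download("vader_lexicon")  # one-time download of the VADER lexicon
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores(example))  # neg/neu/pos/compound scores for the sample review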
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/821/129821893.ipynb
| null | null |
[{"Id": 129821893, "ScriptId": 38604764, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15133337, "CreationDate": "05/16/2023 18:03:09", "VersionNumber": 1.0, "Title": "Sentiment Analysis CSS2", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 72.0, "LinesInsertedFromPrevious": 72.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 462 | 0 | 462 | 462 |
||
129821114
|
<jupyter_start><jupyter_text>Predicting Critical Heat Flux
### Context
This dataset was prepared for the journal article entitled "On the prediction of critical heat flux using a physics-informed machine learning-aided framework" (doi: 10.1016/j.applthermaleng.2019.114540). The dataset contains processed and compiled records of experimental critical heat flux and boundary conditions used for the work presented in the article.
Kaggle dataset identifier: predicting-heat-flux
<jupyter_script># # Imports
try:
from fancyimpute import IterativeSVD
from fancyimpute import KNN
print("Library is already installed.")
except ImportError:
print("Library is not installed. Proceed with installation.")
from fancyimpute import IterativeSVD
from fancyimpute import KNN
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
import xgboost as xgb
import lightgbm as lgb
from catboost import CatBoostRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error
import optuna
# ### Path management
base: str
if os.getcwd() == "/kaggle/working":
base = "/kaggle"
else:
base = os.path.join(os.getcwd())
def get_full_dir(sub_dir: str) -> str:
return os.path.join(base, sub_dir)
# # EDA
df_sample_submission: pd.DataFrame = pd.read_csv(
get_full_dir("input/playground-series-s3e15/sample_submission.csv")
)
df_data: pd.DataFrame = pd.read_csv(
get_full_dir("input/playground-series-s3e15/data.csv"), index_col="id"
)
df_og: pd.DataFrame = pd.read_csv(
get_full_dir("input/predicting-heat-flux/Data_CHF_Zhao_2020_ATE.csv"),
index_col="id",
)
df_data.isna().sum()
df_og.isna().sum()
# ##### Our training data contains lots of missing values. We could impute them using a very simple strategy like the mean or median, but given how much is missing this would likely result in poor model quality. Instead, we can predict what each missing value should be based on the other, non-null values in these columns. The original data could be very helpful for that purpose since it does not contain any missing values.
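# For contrast, the "very simple strategy" mentioned above would look like the sketch below (mean imputation of the numeric columns); it is shown for comparison only and is not used in this notebook:
from sklearn.impute import SimpleImputer

numeric_cols = df_data.select_dtypes(include="number").columns
simple_imputer = SimpleImputer(strategy="mean")
df_data_mean_imputed = df_data.copy()
df_data_mean_imputed[numeric_cols] = simple_imputer.fit_transform(df_data[numeric_cols])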
df_sample_submission.head()
# ##### As described in the competition, we are predicting the missing values of x_e_out; our test data consists of all the rows where x_e_out is missing.
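# A quick sketch of what that means on the raw given data (the notebook builds the actual test split later, after merging and imputation):
df_to_predict = df_data[df_data["x_e_out [-]"].isna()]
print(len(df_to_predict), "rows need an x_e_out [-] prediction")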
df_data.head()
df_og.head()
fig, axes = plt.subplots(nrows=len(df_data.columns), ncols=4, figsize=(26, 50))
axes = axes.flatten()
def graph_numerical_feature(
data: list[tuple[pd.DataFrame, str, str]], target: str, axes_start_i: int
) -> None:
# Plot densities
for df, column, label in data:
sns.kdeplot(df[column], label=label, ax=axes[axes_start_i], fill=False)
for df, column, label in data:
sns.histplot(
df[column], label=label, ax=axes[axes_start_i + 1], stat="density", bins=50
)
# Plot boxplot
tmp_data_dict = {}
for df, column, label in data:
tmp_data_dict[label] = df[column]
df_tmp = pd.DataFrame(tmp_data_dict)
sns.boxplot(data=df_tmp, ax=axes[axes_start_i + 2])
    axes[axes_start_i + 2].set_xlabel(data[0][1])  # use the feature name directly instead of relying on the caller's loop variable `col`
# Plot target correlation
for df, column, label in data:
sns.scatterplot(
x=column, y=target, label=label, ax=axes[axes_start_i + 3], data=df
)
# Plot legends
axes[axes_start_i].legend()
axes[axes_start_i + 1].legend()
axes[axes_start_i + 3].legend()
def graph_categorical_feature(
data: list[tuple[pd.DataFrame, str, str]], target: str, axes_start_i: int
) -> None:
# Makes sure that the categories are shown in the same order
category_order: list[str] = data[0][0][data[0][1]].unique()
# Plot barplots
for il, data_pack in enumerate(data):
df, column, label = data_pack
sns.countplot(
x=column,
data=df,
label=label,
order=category_order,
ax=axes[axes_start_i + il],
)
axes[axes_start_i + il].tick_params(
axis="x", rotation=90
) # Rotate x-axis labels
# Plot target correlation
for il, data_pack in enumerate(data):
df, column, label = data_pack
sns.barplot(
x=column,
y=target,
data=df,
label=label,
order=category_order,
ax=axes[axes_start_i + 2 + il],
)
axes[axes_start_i + 2 + il].tick_params(
axis="x", rotation=90
) # Rotate x-axis labels
# Plot legends
axes[axes_start_i].legend()
axes[axes_start_i + 1].legend()
axes[axes_start_i + 2].legend()
axes[axes_start_i + 3].legend()
i = 0
for col in df_data.columns:
if pd.api.types.is_numeric_dtype(df_data[col]):
graph_numerical_feature(
[(df_data, col, "given"), (df_og, col, "original")], "x_e_out [-]", i
)
else:
graph_categorical_feature(
[(df_data, col, "given"), (df_og, col, "original")], "x_e_out [-]", i
)
i += 4
plt.show()
# ##### The original data closely follows the distribution of our given synthetic data. This suggests the values were nulled evenly across all features in the given dataset, which means the original data should be safe to use without introducing feature or distribution bias.
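# One way to back up this visual comparison with numbers (an illustrative sketch, not part of the original analysis) is a two-sample Kolmogorov-Smirnov test per numeric feature:
from scipy import stats

for column in df_data.select_dtypes(include="number").columns:
    ks = stats.ks_2samp(df_data[column].dropna(), df_og[column].dropna())
    print(f"{column}: KS statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3g}")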
def show_feature_correlation(df: pd.DataFrame, title: str):
plt.figure(figsize=(20, 20))
corr_matrix = df.select_dtypes(include="number").corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr_matrix, dtype=bool)
mask[np.triu_indices_from(mask)] = True
sns.heatmap(corr_matrix, cmap="coolwarm", annot=True, mask=mask)
plt.title(title)
plt.show()
show_feature_correlation(df_data, "Given")
show_feature_correlation(df_og, "Original")
# # Data Prep
numerical_columns = [
"pressure [MPa]",
"mass_flux [kg/m2-s]",
"x_e_out [-]",
"D_e [mm]",
"D_h [mm]",
"length [mm]",
"chf_exp [MW/m2]",
]
numerical_features = [
"pressure [MPa]",
"mass_flux [kg/m2-s]",
"D_e [mm]",
"D_h [mm]",
"length [mm]",
"chf_exp [MW/m2]",
]
categorical_columns = ["author", "geometry"]
target = "x_e_out [-]"
label_encoders = {}
def label_encode(df: pd.DataFrame) -> None:
for column in categorical_columns:
label_encoder: LabelEncoder = LabelEncoder()
df[column] = label_encoder.fit_transform(df[column])
label_encoders[column] = label_encoder
def reverse_encode(df: pd.DataFrame) -> None:
for column in label_encoders.keys():
df[column] = df[column].astype(int)
df[column] = label_encoders[column].inverse_transform(df[column])
df_train: pd.DataFrame = pd.concat([df_data, df_og])
label_encode(df_train)
# # Train
# ## Baseline 0: Impute all missing numerical values, including the target, using IterativeSVD matrix completion
# Create an instance of imputer
imputer = IterativeSVD()
# imputer = KNN()
# Perform the imputation
df_train_imputed = pd.DataFrame(
imputer.fit_transform(df_train), columns=df_train.columns
)
# Print the imputed DataFrame
print("Imputed DataFrame:")
df_train_imputed
df_train_imputed.isna().sum()
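# A MICE-style alternative could be sketched with scikit-learn's IterativeImputer; this is an illustration only, and the notebook keeps the fancyimpute result from above:
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 -- enables IterativeImputer
from sklearn.impute import IterativeImputer

mice_imputer = IterativeImputer(max_iter=10, random_state=0)
df_train_mice = pd.DataFrame(
    mice_imputer.fit_transform(df_train), columns=df_train.columns
)
df_train_mice.isna().sum()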
# ## Baseline 1: Tree boosting on imputed data
# ### Construct new training data
# Add indicator flags marking which feature values were originally missing
for column in numerical_features:
    if df_train[column].isna().sum() > 0:
        df_train[f"{column}_was_nan"] = df_train[column].isna().astype(int)
# Replace the missing feature values with their imputed counterparts
# (use positional values, since df_train_imputed has a fresh RangeIndex while df_train kept the concatenated "id" index)
for column in numerical_features:
    if df_train[column].isna().sum() > 0:
        df_train[column] = df_train_imputed[column].to_numpy()
df_test = df_train[df_train[target].isna()]
df_train = df_train[~df_train[target].isna()]
import re
def remove_special_characters(column_name):
# Remove special characters using regular expressions
return re.sub(r"[^a-zA-Z0-9_]+", "", column_name)
def remove_special_characters_from_dataframe(df):
# Remove special characters from all column names in the DataFrame
df.columns = [remove_special_characters(col) for col in df.columns]
return df
df_test = remove_special_characters_from_dataframe(df_test)
df_train = remove_special_characters_from_dataframe(df_train)
import optuna
import lightgbm as lgb
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
def objective(trial):
# Define the hyperparameter search space
params = {
"objective": "regression",
"metric": "mae",
"boosting_type": "gbdt",
"num_leaves": trial.suggest_int("num_leaves", 10, 100),
"learning_rate": trial.suggest_float("learning_rate", 0.01, 0.1),
"feature_fraction": trial.suggest_float("feature_fraction", 0.1, 1.0),
"bagging_fraction": trial.suggest_float("bagging_fraction", 0.1, 1.0),
"bagging_freq": trial.suggest_int("bagging_freq", 1, 10),
"min_child_samples": trial.suggest_int("min_child_samples", 1, 20),
"lambda_l1": trial.suggest_float("lambda_l1", 0.01, 10.0),
"lambda_l2": trial.suggest_float("lambda_l2", 0.01, 10.0),
"verbosity": -1,
}
# Split the data into training and validation sets
X = df_train.drop("x_e_out", axis=1)
y = df_train["x_e_out"]
X_train, X_val, y_train, y_val = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Train the LGBM regressor
model = lgb.LGBMRegressor(**params)
model.fit(X_train, y_train)
# Predict on the validation set and calculate MAE
y_pred = model.predict(X_val)
mae = mean_absolute_error(y_val, y_pred)
return mae
# Create the Optuna study
study = optuna.create_study(direction="minimize")
# Start the hyperparameter search
study.optimize(objective, n_trials=200)
# Print the best parameters and the best MAE
best_params = study.best_params
best_mae = study.best_value
print(f"Best Parameters: {best_params}")
print(f"Best MAE: {best_mae}")
class Pipeline:
def __init__(self, model_type: str):
self.model_type = model_type
if model_type == "LightGBM":
self.model = lgb.LGBMRegressor(**study.best_params)
# elif model_type == 'CatBoost':
# self.model = CatBoostRegressor(**best_param[model_type])
# elif model_type == 'XGBoost':
# self.model = xgb.XGBRegressor(**best_param[model_type])
else:
raise ValueError(
f"Given model type is not supported! {model_type} was given."
)
def fit(self, X, y, X_val, y_val):
if self.model_type in [
"GradientBoostingRegressor",
"HuberRegressor",
"AdaBoostRegressor",
"RandomForestRegressor",
"ARDRegression",
"PLSRegression",
"ExtraTreesRegressor",
]:
self.model.fit(X, y.ravel())
else:
self.model.fit(
X, y.ravel(), eval_set=[(X_val, y_val.ravel())], verbose=False
)
def predict(self, X):
return self.model.predict(X)
def train(model_type):
X = df_train.drop(["x_e_out"], axis=1)
y = df_train["x_e_out"]
SKFs = KFold(n_splits=5, shuffle=True, random_state=1)
losses = []
pipelines = []
idx_vls = []
for fold, (idx_tr, idx_vl) in enumerate(SKFs.split(X, y)):
train_dataframe = df_train.iloc[idx_tr]
# train_dataframe = pd.concat([train_dataframe, df_og])
# train_dataframe.reset_index(drop=True, inplace=True)
dev_dataframe = df_train.iloc[idx_vl]
# splits data to features and target
X_train = train_dataframe.drop("x_e_out", axis=1)
y_train = train_dataframe["x_e_out"]
X_dev = dev_dataframe.drop("x_e_out", axis=1)
y_dev = dev_dataframe["x_e_out"]
        # creates and fits a pipeline
pipelineMy = Pipeline(model_type)
pipelineMy.fit(X_train, y_train, X_dev, y_dev)
# evaluates the model
pipelines.append(pipelineMy)
loss = mean_absolute_error(y_dev, pipelineMy.predict(X_dev))
losses.append(loss)
idx_vls.append(idx_vl)
print(f"Fold {fold} loss: {loss}")
print(f"Mean loss: {np.array(losses).mean()}")
return losses, pipelines, idx_vls
losses, pipelines, eval_sets = train("LightGBM")
# # Make predictions
predictions = 0
df_test = df_test.drop("x_e_out", axis=1)
for pipeline in pipelines:
predictions += pipeline.predict(df_test)
predictions = predictions / float(len(pipelines))
df_test["x_e_out [-]"] = predictions
df_test["x_e_out [-]"].to_csv("submission.csv")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/821/129821114.ipynb
|
predicting-heat-flux
|
saurabhshahane
|
[{"Id": 129821114, "ScriptId": 38599064, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13329284, "CreationDate": "05/16/2023 17:55:09", "VersionNumber": 2.0, "Title": "Feature Imputation on Heat Flux | EDA | Baseline", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 319.0, "LinesInsertedFromPrevious": 154.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 165.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
|
[{"Id": 186198127, "KernelVersionId": 129821114, "SourceDatasetVersionId": 1921393}]
|
[{"Id": 1921393, "DatasetId": 1145869, "DatasourceVersionId": 1959907, "CreatorUserId": 2411256, "LicenseName": "Attribution 4.0 International (CC BY 4.0)", "CreationDate": "02/08/2021 11:44:07", "VersionNumber": 1.0, "Title": "Predicting Critical Heat Flux", "Slug": "predicting-heat-flux", "Subtitle": "prediction of critical heat flux using Machine Learning", "Description": "### Context\n\nThis dataset was prepared for the journal article entitled \"On the prediction of critical heat flux using a physics-informed machine learning-aided framework\" (doi: 10.1016/j.applthermaleng.2019.114540). The dataset contains processed and compiled records of experimental critical heat flux and boundary conditions used for the work presented in the article. \n\n### Acknowledgements\n\nZhao, Xingang (2020), \u201cData for: On the prediction of critical heat flux using a physics-informed machine learning-aided framework\u201d, Mendeley Data, V1, doi: 10.17632/5p5h37tyv7.1", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1145869, "CreatorUserId": 2411256, "OwnerUserId": 2411256.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1921393.0, "CurrentDatasourceVersionId": 1959907.0, "ForumId": 1163376, "Type": 2, "CreationDate": "02/08/2021 11:44:07", "LastActivityDate": "02/08/2021", "TotalViews": 6889, "TotalDownloads": 589, "TotalVotes": 42, "TotalKernels": 78}]
|
[{"Id": 2411256, "UserName": "saurabhshahane", "DisplayName": "Saurabh Shahane", "RegisterDate": "10/26/2018", "PerformanceTier": 4}]
|
# # Imports
try:
from fancyimpute import IterativeSVD
from fancyimpute import KNN
print("Library is already installed.")
except ImportError:
print("Library is not installed. Proceed with installation.")
from fancyimpute import IterativeSVD
from fancyimpute import KNN
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
import xgboost as xgb
import lightgbm as lgb
from catboost import CatBoostRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error
import optuna
# ### Path management
base: str
if os.getcwd() == "/kaggle/working":
base = "/kaggle"
else:
base = os.path.join(os.getcwd())
def get_full_dir(sub_dir: str) -> str:
return os.path.join(base, sub_dir)
# # EDA
df_sample_submission: pd.DataFrame = pd.read_csv(
get_full_dir("input/playground-series-s3e15/sample_submission.csv")
)
df_data: pd.DataFrame = pd.read_csv(
get_full_dir("input/playground-series-s3e15/data.csv"), index_col="id"
)
df_og: pd.DataFrame = pd.read_csv(
get_full_dir("input/predicting-heat-flux/Data_CHF_Zhao_2020_ATE.csv"),
index_col="id",
)
df_data.isna().sum()
df_og.isna().sum()
# ##### Our training data contains lots of missing values, we could impute them using a very simple strategy like mean or median however this will likely result is poor model quality due to the about of missing value. Instead, we can also predict what value the missing value should have based off the other non-null value in these columns. The original data could be very he puff for that purpose since it does not contain any missing values.
df_sample_submission.head()
# ##### As describe in the completions we are prediction the missing values for x_e_out, our test data consist of all the row with missing x_e_out.
df_data.head()
df_og.head()
fig, axes = plt.subplots(nrows=len(df_data.columns), ncols=4, figsize=(26, 50))
axes = axes.flatten()
def graph_numerical_feature(
data: list[tuple[pd.DataFrame, str, str]], target: str, axes_start_i: int
) -> None:
# Plot densities
for df, column, label in data:
sns.kdeplot(df[column], label=label, ax=axes[axes_start_i], fill=False)
for df, column, label in data:
sns.histplot(
df[column], label=label, ax=axes[axes_start_i + 1], stat="density", bins=50
)
# Plot boxplot
tmp_data_dict = {}
for df, column, label in data:
tmp_data_dict[label] = df[column]
df_tmp = pd.DataFrame(tmp_data_dict)
sns.boxplot(data=df_tmp, ax=axes[axes_start_i + 2])
axes[axes_start_i + 2].set_xlabel(col)
# Plot target correlation
for df, column, label in data:
sns.scatterplot(
x=column, y=target, label=label, ax=axes[axes_start_i + 3], data=df
)
# Plot legends
axes[axes_start_i].legend()
axes[axes_start_i + 1].legend()
axes[axes_start_i + 3].legend()
def graph_categorical_feature(
data: list[tuple[pd.DataFrame, str, str]], target: str, axes_start_i: int
) -> None:
# Makes sure that the categories are shown in the same order
category_order: list[str] = data[0][0][data[0][1]].unique()
# Plot barplots
for il, data_pack in enumerate(data):
df, column, label = data_pack
sns.countplot(
x=column,
data=df,
label=label,
order=category_order,
ax=axes[axes_start_i + il],
)
axes[axes_start_i + il].tick_params(
axis="x", rotation=90
) # Rotate x-axis labels
# Plot target correlation
for il, data_pack in enumerate(data):
df, column, label = data_pack
sns.barplot(
x=column,
y=target,
data=df,
label=label,
order=category_order,
ax=axes[axes_start_i + 2 + il],
)
axes[axes_start_i + 2 + il].tick_params(
axis="x", rotation=90
) # Rotate x-axis labels
# Plot legends
axes[axes_start_i].legend()
axes[axes_start_i + 1].legend()
axes[axes_start_i + 2].legend()
axes[axes_start_i + 3].legend()
i = 0
for col in df_data.columns:
if pd.api.types.is_numeric_dtype(df_data[col]):
graph_numerical_feature(
[(df_data, col, "given"), (df_og, col, "original")], "x_e_out [-]", i
)
else:
graph_categorical_feature(
[(df_data, col, "given"), (df_og, col, "original")], "x_e_out [-]", i
)
i += 4
plt.show()
# ##### The original data closely follows the distribution of our given synthetic data. This suggesting the value where nulled in our given data set evenly across all features, this means that original data should be good to use without introduction feature or distribution bias.
def show_feature_correlation(df: pd.DataFrame, title: str):
plt.figure(figsize=(20, 20))
corr_matrix = df.select_dtypes(include="number").corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr_matrix, dtype=bool)
mask[np.triu_indices_from(mask)] = True
sns.heatmap(corr_matrix, cmap="coolwarm", annot=True, mask=mask)
plt.title(title)
plt.show()
show_feature_correlation(df_data, "Given")
show_feature_correlation(df_og, "Original")
# # Data Prep
numerical_columns = [
"pressure [MPa]",
"mass_flux [kg/m2-s]",
"x_e_out [-]",
"D_e [mm]",
"D_h [mm]",
"length [mm]",
"chf_exp [MW/m2]",
]
numerical_features = [
"pressure [MPa]",
"mass_flux [kg/m2-s]",
"D_e [mm]",
"D_h [mm]",
"length [mm]",
"chf_exp [MW/m2]",
]
categorical_columns = ["author", "geometry"]
target = "x_e_out [-]"
label_encoders = {}
def label_encode(df: pd.DataFrame) -> None:
for column in categorical_columns:
label_encoder: LabelEncoder = LabelEncoder()
df[column] = label_encoder.fit_transform(df[column])
label_encoders[column] = label_encoder
def reverse_encode(df: pd.DataFrame) -> None:
for column in label_encoders.keys():
df[column] = df[column].astype(int)
df[column] = label_encoders[column].inverse_transform(df[column])
df_train: pd.DataFrame = pd.concat([df_data, df_og])
label_encode(df_train)
# # Train
# ## Baseline 0: Impute all missing numerical value including target using MICE
# Create an instance of imputer
imputer = IterativeSVD()
# imputer = KNN()
# Perform the imputation
df_train_imputed = pd.DataFrame(
imputer.fit_transform(df_train), columns=df_train.columns
)
# Print the imputed DataFrame
print("Imputed DataFrame:")
df_train_imputed
df_train_imputed.isna().sum()
# ## Baseline 1: Tree boosting on imputed data
# ### Construct new training data
for column in numerical_features:
if df_train[column].isna().sum() > 0:
df_train[f"{column}_was_an"] = df_train[column].isna().astype(int)
for column in numerical_features:
if df_train[column].isna().sum() > 0:
df_train[column] = df_train_imputed[column]
df_test = df_train[df_train[target].isna()]
df_train = df_train[~df_train[target].isna()]
import re
def remove_special_characters(column_name):
# Remove special characters using regular expressions
return re.sub(r"[^a-zA-Z0-9_]+", "", column_name)
def remove_special_characters_from_dataframe(df):
# Remove special characters from all column names in the DataFrame
df.columns = [remove_special_characters(col) for col in df.columns]
return df
df_test = remove_special_characters_from_dataframe(df_test)
df_train = remove_special_characters_from_dataframe(df_train)
import optuna
import lightgbm as lgb
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
def objective(trial):
# Define the hyperparameter search space
params = {
"objective": "regression",
"metric": "mae",
"boosting_type": "gbdt",
"num_leaves": trial.suggest_int("num_leaves", 10, 100),
"learning_rate": trial.suggest_float("learning_rate", 0.01, 0.1),
"feature_fraction": trial.suggest_float("feature_fraction", 0.1, 1.0),
"bagging_fraction": trial.suggest_float("bagging_fraction", 0.1, 1.0),
"bagging_freq": trial.suggest_int("bagging_freq", 1, 10),
"min_child_samples": trial.suggest_int("min_child_samples", 1, 20),
"lambda_l1": trial.suggest_float("lambda_l1", 0.01, 10.0),
"lambda_l2": trial.suggest_float("lambda_l2", 0.01, 10.0),
"verbosity": -1,
}
# Split the data into training and validation sets
X = df_train.drop("x_e_out", axis=1)
y = df_train["x_e_out"]
X_train, X_val, y_train, y_val = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Train the LGBM regressor
model = lgb.LGBMRegressor(**params)
model.fit(X_train, y_train)
# Predict on the validation set and calculate MAE
y_pred = model.predict(X_val)
mae = mean_absolute_error(y_val, y_pred)
return mae
# Create the Optuna study
study = optuna.create_study(direction="minimize")
# Start the hyperparameter search
study.optimize(objective, n_trials=200)
# Print the best parameters and the best MAE
best_params = study.best_params
best_mae = study.best_value
print(f"Best Parameters: {best_params}")
print(f"Best MAE: {best_mae}")
class Pipeline:
def __init__(self, model_type: str):
self.model_type = model_type
if model_type == "LightGBM":
self.model = lgb.LGBMRegressor(**study.best_params)
# elif model_type == 'CatBoost':
# self.model = CatBoostRegressor(**best_param[model_type])
# elif model_type == 'XGBoost':
# self.model = xgb.XGBRegressor(**best_param[model_type])
else:
raise ValueError(
f"Given model type is not supported! {model_type} was given."
)
def fit(self, X, y, X_val, y_val):
if self.model_type in [
"GradientBoostingRegressor",
"HuberRegressor",
"AdaBoostRegressor",
"RandomForestRegressor",
"ARDRegression",
"PLSRegression",
"ExtraTreesRegressor",
]:
self.model.fit(X, y.ravel())
else:
self.model.fit(
X, y.ravel(), eval_set=[(X_val, y_val.ravel())], verbose=False
)
def predict(self, X):
return self.model.predict(X)
def train(model_type):
X = df_train.drop(["x_e_out"], axis=1)
y = df_train["x_e_out"]
SKFs = KFold(n_splits=5, shuffle=True, random_state=1)
losses = []
pipelines = []
idx_vls = []
for fold, (idx_tr, idx_vl) in enumerate(SKFs.split(X, y)):
train_dataframe = df_train.iloc[idx_tr]
# train_dataframe = pd.concat([train_dataframe, df_og])
# train_dataframe.reset_index(drop=True, inplace=True)
dev_dataframe = df_train.iloc[idx_vl]
# splits data to features and target
X_train = train_dataframe.drop("x_e_out", axis=1)
y_train = train_dataframe["x_e_out"]
X_dev = dev_dataframe.drop("x_e_out", axis=1)
y_dev = dev_dataframe["x_e_out"]
        # creates and fits a pipeline
pipelineMy = Pipeline(model_type)
pipelineMy.fit(X_train, y_train, X_dev, y_dev)
# evaluates the model
pipelines.append(pipelineMy)
loss = mean_absolute_error(y_dev, pipelineMy.predict(X_dev))
losses.append(loss)
idx_vls.append(idx_vl)
print(f"Fold {fold} loss: {loss}")
print(f"Mean loss: {np.array(losses).mean()}")
return losses, pipelines, idx_vls
losses, pipelines, eval_sets = train("LightGBM")
# # Make predictions
predictions = 0
df_test = df_test.drop("x_e_out", axis=1)
for pipeline in pipelines:
predictions += pipeline.predict(df_test)
predictions = predictions / float(len(pipelines))
df_test["x_e_out [-]"] = predictions
df_test["x_e_out [-]"].to_csv("submission.csv")
| false | 0 | 3,720 | 2 | 3,836 | 3,720 |
||
129856838
|
import pandas as pd
data = [
["SUNNY", "HOT", "HIGH", False, "NO"],
["SUNNY", "HOT", "HIGH", True, "NO"],
["CLOUDY", "HOT", "HIGH", False, "YES"],
["RAINY", "MILD", "HIGH", False, "YES"],
["RAINY", "COOL", "NORMAL", False, "YES"],
["RAINY", "COOL", "NORMAL", True, "YES"],
["CLOUDY", "COOL", "NORMAL", True, "YES"],
["SUNNY", "MILD", "HIGH", False, "NO"],
["SUNNY", "COOL", "NORMAL", False, "YES"],
["RAINY", "MILD", "NORMAL", False, "YES"],
["SUNNY", "MILD", "NORMAL", True, "YES"],
["CLOUDY", "MILD", "HIGH", True, "YES"],
["CLOUDY", "HOT", "NORMAL", False, "YES"],
["RAINY", "MILD", "HIGH", True, "NO"],
]
# Create a pandas DataFrame
columns = ["Outlook", "Temperature", "Humidity", "Windy", "Play"]
df = pd.DataFrame(data, columns=columns)
df.head(5)
class Question:
"""A Question is used to partition a dataset.
This class just records a 'column number' (e.g., 0 for Color) and a
'column value' (e.g., Green). The 'match' method is used to compare
the feature value in an example to the feature value stored in the
question. See the demo below.
"""
def __init__(self, column, value):
self.column = column
self.value = value
def match(self, df, index):
# Compare the feature value in an example to the
# feature value in this question.
val = df.iloc[index, self.column]
if pd.api.types.is_numeric_dtype(val):
return val >= self.value
else:
return val == self.value
def __repr__(self):
# This is just a helper method to print
# the question in a readable format.
condition = "=="
if pd.api.types.is_numeric_dtype(self.value):
condition = ">="
return "Is %s %s %s?" % (df.columns[self.column], condition, str(self.value))
Question(4, True)
q = Question(0, "SUNNY") # Column Outlook = SUNNY
q.match(df, 2)  # Check whether the value in df ROW 2 equals "SUNNY"
def class_counts(df):
"""Counts the number of each type of example in a DataFrame."""
counts = {} # a dictionary of label -> count.
for index, row in df.iterrows():
# in our dataset format, the label is always the last column
label = row.iloc[-1]
if label not in counts:
counts[label] = 0
counts[label] += 1
return counts
def partition(df, question):
true_rows, false_rows = [], []
for index, row in df.iterrows():
if question.match(df, index):
true_rows.append(row)
else:
false_rows.append(row)
true_df = pd.DataFrame(true_rows, columns=df.columns)
false_df = pd.DataFrame(false_rows, columns=df.columns)
return true_df, false_df
true_rows, false_rows = partition(df, Question(0, "CLOUDY"))
# This will contain all the 'CLOUDY' rows.
true_rows
def gini(df):
"""Calculate the Gini Impurity for a DataFrame."""
counts = class_counts(df)
impurity = 1
total_rows = len(df)
for lbl in counts:
prob_of_lbl = counts[lbl] / total_rows
impurity -= prob_of_lbl**2
return impurity
no_mixing = [["SUNNY"], ["SUNNY"]]
test_noMIX = pd.DataFrame(no_mixing, columns=["Fruit"])
# this will return 0
gini(test_noMIX)
def info_gain(left, right, current_uncertainty):
"""Information Gain.
The uncertainty of the starting node, minus the weighted impurity of
two child nodes.
"""
p = float(len(left)) / (len(left) + len(right))
return current_uncertainty - p * gini(left) - (1 - p) * gini(right)
current_uncertainty = gini(df)
current_uncertainty
true_rows
true_rows, false_rows = partition(df, Question(0, "SUNNY"))
info_gain(true_rows, false_rows, current_uncertainty)
invoke = ["SUNNY", "CLOUDY", "RAINY"]
for i in invoke:
true_rows, false_rows = partition(df, Question(0, i))
print(i, ":", info_gain(true_rows, false_rows, current_uncertainty))
true_rows
def find_best_split(df):
"""Find the best question to ask by iterating over every feature / value
and calculating the information gain."""
best_gain = 0
best_question = None
current_uncertainty = gini(df)
n_features = len(df.columns) - 1
for col in range(n_features):
values = df.iloc[:, col].unique()
for val in values:
question = Question(col, val)
true_rows, false_rows = partition(df, question)
if len(true_rows) == 0 or len(false_rows) == 0:
continue
gain = info_gain(true_rows, false_rows, current_uncertainty)
if gain >= best_gain:
best_gain, best_question = gain, question
return best_gain, best_question
best_gain, best_question = find_best_split(df)
best_question
class Leaf:
"""A Leaf node classifies data.
This holds a dictionary of class (e.g., "Apple") -> number of times
it appears in the rows from the training data that reach this leaf.
"""
def __init__(self, df):
self.predictions = class_counts(df)
class Decision_Node:
    """A Decision Node asks a question.
    This holds a reference to the question, and to the true branch and false
    branch subtrees that are followed depending on the answer (build_tree
    passes the child nodes here, so they are stored as branches).
    """
    def __init__(self, question, true_branch, false_branch):
        self.question = question
        self.true_branch = true_branch
        self.false_branch = false_branch
def build_tree(df):
# Try partitioning the dataset on each of the unique attribute,
# calculate the information gain,
# and return the question that produces the highest gain.
gain, question = find_best_split(df)
# Base case: no further info gain
# Since we can ask no further questions,
# we'll return a leaf.
if gain == 0:
return Leaf(df)
# If we reach here, we have found a useful feature / value
# to partition on.
true_rows, false_rows = partition(df, question)
# Recursively build the true branch.
true_branch = build_tree(true_rows)
# Recursively build the false branch.
false_branch = build_tree(false_rows)
# Return a Question node.
# This records the best feature / value to ask at this point,
# as well as the branches to follow
# depending on the answer.
return Decision_Node(question, true_branch, false_branch)
build_tree(df)
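# A minimal sketch (not from the original notebook) of how the returned tree can be used,
# assuming Decision_Node stores its child subtrees as true_branch / false_branch as above;
# the helper names classify and print_tree are illustrative:
def classify(row_index, dataframe, node):
    # Walk the tree until a Leaf is reached and return its class counts.
    if isinstance(node, Leaf):
        return node.predictions
    if node.question.match(dataframe, row_index):
        return classify(row_index, dataframe, node.true_branch)
    return classify(row_index, dataframe, node.false_branch)
def print_tree(node, spacing=""):
    # Recursively print the questions and leaves of the tree.
    if isinstance(node, Leaf):
        print(spacing + "Predict", node.predictions)
        return
    print(spacing + str(node.question))
    print(spacing + "--> True:")
    print_tree(node.true_branch, spacing + "  ")
    print(spacing + "--> False:")
    print_tree(node.false_branch, spacing + "  ")
my_tree = build_tree(df)
print_tree(my_tree)
classify(0, df, my_tree)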
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import LabelEncoder
import pandas as pd
# Define the dataset
data = [
["SUNNY", "HOT", "HIGH", False, "NO"],
["SUNNY", "HOT", "HIGH", True, "NO"],
["CLOUDY", "HOT", "HIGH", False, "YES"],
["RAINY", "MILD", "HIGH", False, "YES"],
["RAINY", "COOL", "NORMAL", False, "YES"],
["RAINY", "COOL", "NORMAL", True, "YES"],
["CLOUDY", "COOL", "NORMAL", True, "YES"],
["SUNNY", "MILD", "HIGH", False, "NO"],
["SUNNY", "COOL", "NORMAL", False, "YES"],
["RAINY", "MILD", "NORMAL", False, "YES"],
["SUNNY", "MILD", "NORMAL", True, "YES"],
["CLOUDY", "MILD", "HIGH", True, "YES"],
["CLOUDY", "HOT", "NORMAL", False, "YES"],
["RAINY", "MILD", "HIGH", True, "NO"],
]
# Convert the dataset to a pandas DataFrame
df = pd.DataFrame(
data, columns=["Outlook", "Temperature", "Humidity", "Windy", "Label"]
)
# Encode categorical features
label_encoder = LabelEncoder()
for feature in df.columns[:-1]:
df[feature] = label_encoder.fit_transform(df[feature])
# Separate the features and labels
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
# Create the decision tree classifier
clf = DecisionTreeClassifier()
# Train the decision tree classifier
clf.fit(X, y)
# Make predictions
new_data = [[2, 1, 0, 0]]
predicted_label = clf.predict(new_data)
print("Predicted label:", predicted_label)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/856/129856838.ipynb
| null | null |
[{"Id": 129856838, "ScriptId": 38596800, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8500408, "CreationDate": "05/17/2023 02:22:15", "VersionNumber": 1.0, "Title": "DecisionTree", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 275.0, "LinesInsertedFromPrevious": 275.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
data = [
["SUNNY", "HOT", "HIGH", False, "NO"],
["SUNNY", "HOT", "HIGH", True, "NO"],
["CLOUDY", "HOT", "HIGH", False, "YES"],
["RAINY", "MILD", "HIGH", False, "YES"],
["RAINY", "COOL", "NORMAL", False, "YES"],
["RAINY", "COOL", "NORMAL", True, "YES"],
["CLOUDY", "COOL", "NORMAL", True, "YES"],
["SUNNY", "MILD", "HIGH", False, "NO"],
["SUNNY", "COOL", "NORMAL", False, "YES"],
["RAINY", "MILD", "NORMAL", False, "YES"],
["SUNNY", "MILD", "NORMAL", True, "YES"],
["CLOUDY", "MILD", "HIGH", True, "YES"],
["CLOUDY", "HOT", "NORMAL", False, "YES"],
["RAINY", "MILD", "HIGH", True, "NO"],
]
# Create a pandas DataFrame
columns = ["Outlook", "Temperature", "Humidity", "Windy", "Play"]
df = pd.DataFrame(data, columns=columns)
df.head(5)
class Question:
"""A Question is used to partition a dataset.
This class just records a 'column number' (e.g., 0 for Color) and a
'column value' (e.g., Green). The 'match' method is used to compare
the feature value in an example to the feature value stored in the
question. See the demo below.
"""
def __init__(self, column, value):
self.column = column
self.value = value
def match(self, df, index):
# Compare the feature value in an example to the
# feature value in this question.
val = df.iloc[index, self.column]
if pd.api.types.is_numeric_dtype(val):
return val >= self.value
else:
return val == self.value
def __repr__(self):
# This is just a helper method to print
# the question in a readable format.
condition = "=="
if pd.api.types.is_numeric_dtype(self.value):
condition = ">="
return "Is %s %s %s?" % (df.columns[self.column], condition, str(self.value))
Question(4, True)
q = Question(0, "SUNNY") # Column Outlook = SUNNY
q.match(df, 2)  # Check whether the value in df ROW 2 equals "SUNNY"
def class_counts(df):
"""Counts the number of each type of example in a DataFrame."""
counts = {} # a dictionary of label -> count.
for index, row in df.iterrows():
# in our dataset format, the label is always the last column
label = row.iloc[-1]
if label not in counts:
counts[label] = 0
counts[label] += 1
return counts
def partition(df, question):
true_rows, false_rows = [], []
for index, row in df.iterrows():
if question.match(df, index):
true_rows.append(row)
else:
false_rows.append(row)
true_df = pd.DataFrame(true_rows, columns=df.columns)
false_df = pd.DataFrame(false_rows, columns=df.columns)
return true_df, false_df
true_rows, false_rows = partition(df, Question(0, "CLOUDY"))
# This will contain all the 'CLOUDY' rows.
true_rows
def gini(df):
"""Calculate the Gini Impurity for a DataFrame."""
counts = class_counts(df)
impurity = 1
total_rows = len(df)
for lbl in counts:
prob_of_lbl = counts[lbl] / total_rows
impurity -= prob_of_lbl**2
return impurity
no_mixing = [["SUNNY"], ["SUNNY"]]
test_noMIX = pd.DataFrame(no_mixing, columns=["Fruit"])
# this will return 0
gini(test_noMIX)
def info_gain(left, right, current_uncertainty):
"""Information Gain.
The uncertainty of the starting node, minus the weighted impurity of
two child nodes.
"""
p = float(len(left)) / (len(left) + len(right))
return current_uncertainty - p * gini(left) - (1 - p) * gini(right)
current_uncertainty = gini(df)
current_uncertainty
true_rows
true_rows, false_rows = partition(df, Question(0, "SUNNY"))
info_gain(true_rows, false_rows, current_uncertainty)
invoke = ["SUNNY", "CLOUDY", "RAINY"]
for i in invoke:
true_rows, false_rows = partition(df, Question(0, i))
print(i, ":", info_gain(true_rows, false_rows, current_uncertainty))
true_rows
def find_best_split(df):
"""Find the best question to ask by iterating over every feature / value
and calculating the information gain."""
best_gain = 0
best_question = None
current_uncertainty = gini(df)
n_features = len(df.columns) - 1
for col in range(n_features):
values = df.iloc[:, col].unique()
for val in values:
question = Question(col, val)
true_rows, false_rows = partition(df, question)
if len(true_rows) == 0 or len(false_rows) == 0:
continue
gain = info_gain(true_rows, false_rows, current_uncertainty)
if gain >= best_gain:
best_gain, best_question = gain, question
return best_gain, best_question
best_gain, best_question = find_best_split(df)
best_question
class Leaf:
"""A Leaf node classifies data.
This holds a dictionary of class (e.g., "Apple") -> number of times
it appears in the rows from the training data that reach this leaf.
"""
def __init__(self, df):
self.predictions = class_counts(df)
class Decision_Node:
    """A Decision Node asks a question.
    This holds a reference to the question, and to the true branch and false
    branch subtrees that are followed depending on the answer (build_tree
    passes the child nodes here, so they are stored as branches).
    """
    def __init__(self, question, true_branch, false_branch):
        self.question = question
        self.true_branch = true_branch
        self.false_branch = false_branch
def build_tree(df):
# Try partitioning the dataset on each of the unique attribute,
# calculate the information gain,
# and return the question that produces the highest gain.
gain, question = find_best_split(df)
# Base case: no further info gain
# Since we can ask no further questions,
# we'll return a leaf.
if gain == 0:
return Leaf(df)
# If we reach here, we have found a useful feature / value
# to partition on.
true_rows, false_rows = partition(df, question)
# Recursively build the true branch.
true_branch = build_tree(true_rows)
# Recursively build the false branch.
false_branch = build_tree(false_rows)
# Return a Question node.
# This records the best feature / value to ask at this point,
# as well as the branches to follow
# depending on the answer.
return Decision_Node(question, true_branch, false_branch)
build_tree(df)
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import LabelEncoder
import pandas as pd
# Define the dataset
data = [
["SUNNY", "HOT", "HIGH", False, "NO"],
["SUNNY", "HOT", "HIGH", True, "NO"],
["CLOUDY", "HOT", "HIGH", False, "YES"],
["RAINY", "MILD", "HIGH", False, "YES"],
["RAINY", "COOL", "NORMAL", False, "YES"],
["RAINY", "COOL", "NORMAL", True, "YES"],
["CLOUDY", "COOL", "NORMAL", True, "YES"],
["SUNNY", "MILD", "HIGH", False, "NO"],
["SUNNY", "COOL", "NORMAL", False, "YES"],
["RAINY", "MILD", "NORMAL", False, "YES"],
["SUNNY", "MILD", "NORMAL", True, "YES"],
["CLOUDY", "MILD", "HIGH", True, "YES"],
["CLOUDY", "HOT", "NORMAL", False, "YES"],
["RAINY", "MILD", "HIGH", True, "NO"],
]
# Convert the dataset to a pandas DataFrame
df = pd.DataFrame(
data, columns=["Outlook", "Temperature", "Humidity", "Windy", "Label"]
)
# Encode categorical features
label_encoder = LabelEncoder()
for feature in df.columns[:-1]:
df[feature] = label_encoder.fit_transform(df[feature])
# Separate the features and labels
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
# Create the decision tree classifier
clf = DecisionTreeClassifier()
# Train the decision tree classifier
clf.fit(X, y)
# Make predictions
new_data = [[2, 1, 0, 0]]
predicted_label = clf.predict(new_data)
print("Predicted label:", predicted_label)
| false | 0 | 2,353 | 0 | 2,353 | 2,353 |
||
129598879
|
<jupyter_start><jupyter_text>Mushroom Classification
### Context
Although this dataset was originally contributed to the UCI Machine Learning repository nearly 30 years ago, mushroom hunting (otherwise known as "shrooming") is enjoying new peaks in popularity. Learn which features spell certain death and which are most palatable in this dataset of mushroom characteristics. And how certain can your model be?
### Content
This dataset includes descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota Family Mushroom drawn from The Audubon Society Field Guide to North American Mushrooms (1981). Each species is identified as definitely edible, definitely poisonous, or of unknown edibility and not recommended. This latter class was combined with the poisonous one. The Guide clearly states that there is no simple rule for determining the edibility of a mushroom; no rule like "leaflets three, let it be'' for Poisonous Oak and Ivy.
- **Time period**: Donated to UCI ML 27 April 1987
### Inspiration
- What types of machine learning models perform best on this dataset?
- Which features are most indicative of a poisonous mushroom?
Kaggle dataset identifier: mushroom-classification
<jupyter_script># Hanming Jing
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_curve, auc, f1_score
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
df = pd.read_csv("../input/mushroom-classification/mushrooms.csv")
encoder = LabelEncoder()
df = df.apply(encoder.fit_transform)
df.head()
X = df.drop(columns=["class"])
y = df["class"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print("X_train = ", X_train.shape)
print("y_train = ", y_train.shape)
print("X_test = ", X_test.shape)
print("y_test = ", y_test.shape)
rfc = RandomForestClassifier(n_estimators=100, random_state=0)
gnb = GaussianNB()
rfc.fit(X_train, y_train)
gnb.fit(X_train, y_train)
rfc_probs = rfc.predict_proba(X_test)[:, 1]
gnb_probs = gnb.predict_proba(X_test)[:, 1]
rfc_fpr, rfc_tpr, _ = roc_curve(y_test, rfc_probs)
gnb_fpr, gnb_tpr, _ = roc_curve(y_test, gnb_probs)
rfc_auc = auc(rfc_fpr, rfc_tpr)
gnb_auc = auc(gnb_fpr, gnb_tpr)
plt.figure(figsize=(8, 6))
plt.plot(rfc_fpr, rfc_tpr, label="Random Forest (AUC = %0.2f)" % rfc_auc)
plt.plot(gnb_fpr, gnb_tpr, label="Gaussian NB (AUC = %0.2f)" % gnb_auc)
plt.plot([0, 1], [0, 1], "k--")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic")
plt.legend(loc="lower right")
plt.show()
rfc_preds = rfc.predict(X_test)
gnb_preds = gnb.predict(X_test)
rfc_f1 = f1_score(y_test, rfc_preds)
gnb_f1 = f1_score(y_test, gnb_preds)
print("Random Forest F1 Score: %.2f" % rfc_f1)
print("Bayes F1 Score: %.2f" % gnb_f1)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/598/129598879.ipynb
|
mushroom-classification
| null |
[{"Id": 129598879, "ScriptId": 38535384, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13332464, "CreationDate": "05/15/2023 06:32:42", "VersionNumber": 2.0, "Title": "mushroom classification", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 72.0, "LinesInsertedFromPrevious": 4.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 68.0, "LinesInsertedFromFork": 39.0, "LinesDeletedFromFork": 57.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 33.0, "TotalVotes": 0}]
|
[{"Id": 185830475, "KernelVersionId": 129598879, "SourceDatasetVersionId": 974}]
|
[{"Id": 974, "DatasetId": 478, "DatasourceVersionId": 974, "CreatorUserId": 495305, "LicenseName": "CC0: Public Domain", "CreationDate": "12/01/2016 23:08:00", "VersionNumber": 1.0, "Title": "Mushroom Classification", "Slug": "mushroom-classification", "Subtitle": "Safe to eat or deadly poison?", "Description": "### Context\n\nAlthough this dataset was originally contributed to the UCI Machine Learning repository nearly 30 years ago, mushroom hunting (otherwise known as \"shrooming\") is enjoying new peaks in popularity. Learn which features spell certain death and which are most palatable in this dataset of mushroom characteristics. And how certain can your model be?\n\n### Content \n\nThis dataset includes descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota Family Mushroom drawn from The Audubon Society Field Guide to North American Mushrooms (1981). Each species is identified as definitely edible, definitely poisonous, or of unknown edibility and not recommended. This latter class was combined with the poisonous one. The Guide clearly states that there is no simple rule for determining the edibility of a mushroom; no rule like \"leaflets three, let it be'' for Poisonous Oak and Ivy.\n\n- **Time period**: Donated to UCI ML 27 April 1987\n\n### Inspiration\n\n- What types of machine learning models perform best on this dataset?\n\n- Which features are most indicative of a poisonous mushroom?\n\n### Acknowledgements\n\nThis dataset was originally donated to the UCI Machine Learning repository. You can learn more about past research using the data [here][1]. \n\n#[Start a new kernel][2]\n\n\n [1]: https://archive.ics.uci.edu/ml/datasets/Mushroom\n [2]: https://www.kaggle.com/uciml/mushroom-classification/kernels?modal=true", "VersionNotes": "Initial release", "TotalCompressedBytes": 374003.0, "TotalUncompressedBytes": 374003.0}]
|
[{"Id": 478, "CreatorUserId": 495305, "OwnerUserId": NaN, "OwnerOrganizationId": 7.0, "CurrentDatasetVersionId": 974.0, "CurrentDatasourceVersionId": 974.0, "ForumId": 2099, "Type": 2, "CreationDate": "12/01/2016 23:08:00", "LastActivityDate": "02/06/2018", "TotalViews": 873597, "TotalDownloads": 114985, "TotalVotes": 2206, "TotalKernels": 1371}]
| null |
# Hanming Jing
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_curve, auc, f1_score
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
df = pd.read_csv("../input/mushroom-classification/mushrooms.csv")
encoder = LabelEncoder()
df = df.apply(encoder.fit_transform)
df.head()
X = df.drop(columns=["class"])
y = df["class"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print("X_train = ", X_train.shape)
print("y_train = ", y_train.shape)
print("X_test = ", X_test.shape)
print("y_test = ", y_test.shape)
rfc = RandomForestClassifier(n_estimators=100, random_state=0)
gnb = GaussianNB()
rfc.fit(X_train, y_train)
gnb.fit(X_train, y_train)
rfc_probs = rfc.predict_proba(X_test)[:, 1]
gnb_probs = gnb.predict_proba(X_test)[:, 1]
rfc_fpr, rfc_tpr, _ = roc_curve(y_test, rfc_probs)
gnb_fpr, gnb_tpr, _ = roc_curve(y_test, gnb_probs)
rfc_auc = auc(rfc_fpr, rfc_tpr)
gnb_auc = auc(gnb_fpr, gnb_tpr)
plt.figure(figsize=(8, 6))
plt.plot(rfc_fpr, rfc_tpr, label="Random Forest (AUC = %0.2f)" % rfc_auc)
plt.plot(gnb_fpr, gnb_tpr, label="Gaussian NB (AUC = %0.2f)" % gnb_auc)
plt.plot([0, 1], [0, 1], "k--")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic")
plt.legend(loc="lower right")
plt.show()
rfc_preds = rfc.predict(X_test)
gnb_preds = gnb.predict(X_test)
rfc_f1 = f1_score(y_test, rfc_preds)
gnb_f1 = f1_score(y_test, gnb_preds)
print("Random Forest F1 Score: %.2f" % rfc_f1)
print("Bayes F1 Score: %.2f" % gnb_f1)
| false | 0 | 765 | 0 | 1,067 | 765 |
||
129146739
|
<jupyter_start><jupyter_text>Data science DAY1 Titanic
Kaggle dataset identifier: data-science-day1-titanic
<jupyter_script># 重庆邮电大学 《人工智能与机器学习》实验课 第2次实验 基础代码
# 利用神经网络模型预测泰坦尼克号乘客能否幸免遇难
# 版本2023.05.10.16.00
# 实验要求:
# 1 理解掌握基础代码
# 2 改良基础代码,提升预测准确率
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
import tensorflow.compat.v1 as tf
import pandas as pd
tf.disable_v2_behavior()
# Define the training, test and output file names
train_file = "./titanic/train.csv"
test_file = "./titanic/test.csv"
reslut_file = "./titanic/myreslut.csv"  # predictions are written to this csv file so they can be uploaded to the competition site for scoring
def train():
    # Load the training data
    train_data = np.genfromtxt(
        train_file, dtype=float, delimiter=",", skip_header=1, usecols=(6, 10)
    )  # columns 6 and 10 (not 5 and 9) because the Name field contains a comma and gets split into two fields
    train_label = np.genfromtxt(
        train_file, dtype=float, delimiter=",", skip_header=1, usecols=(1)
    )
    # Fill missing values with the column mean
    tmp = np.nanmean(train_data[:, 0])
    np.nan_to_num(train_data[:, 0], nan=tmp, copy=False)
    tmp = np.nanmean(train_data[:, 1])
    np.nan_to_num(train_data[:, 1], nan=tmp, copy=False)
    # Normalize the data
    train_data = train_data / 1000  # ages are under 100 and fares under 1000, so dividing by 1000 scales everything into [0, 1]
    # Build the neural network model
    model = Sequential()
    model.add(Dense(units=4, input_dim=2))  # 2 inputs (age and fare), `units` outputs
    model.add(
        Dense(units=1, activation="sigmoid")
    )  # input size defaults to the previous layer's output size; 1 output (survived or not), sigmoid activation
    # Compile the model
    model.compile(loss="binary_crossentropy", optimizer="sgd", metrics=["accuracy"])
    # Train
    t = 0  # iteration counter
    T = 50000  # number of iterations
    print("Start training:")
    while t < T:
        loss, acc = model.train_on_batch(train_data, train_label)
        if t % 100 == 0:
            print(t, ": loss=", loss, "; acc=", acc)
        t = t + 1  # increment the iteration counter
    print("Training finished:")
    print("Trained for", t, "iterations")
    print("Final: loss=", loss, "; acc=", acc)
return model
def test(model):
    # Load the test data
    test_data = np.genfromtxt(
        test_file, dtype=float, delimiter=",", skip_header=1, usecols=(5, 9)
    )
    # Fill missing values with the column mean
    tmp = np.nanmean(test_data[:, 0])
    np.nan_to_num(test_data[:, 0], nan=tmp, copy=False)
    tmp = np.nanmean(test_data[:, 1])
    np.nan_to_num(test_data[:, 1], nan=tmp, copy=False)
    # Normalize the data
    test_data = test_data / 1000  # ages are under 100 and fares under 1000, so dividing by 1000 scales everything into [0, 1]
    # Predict
    print("Start predicting:")
    y_hat = model.predict(test_data)
    # values below 0.5 are labeled 0, values of 0.5 or above are labeled 1
    y_hat[y_hat < 0.5] = 0
    y_hat[y_hat >= 0.5] = 1
    # Write the result file
    tmp = np.arange(892, 1310)  # the first column is the PassengerId, from 892 to 1309
    result_data = np.c_[tmp, y_hat.astype(int)]  # concatenate the two vectors column-wise
    np.savetxt(
        reslut_file,
        result_data,
        fmt="%d",
        delimiter=",",
        header="PassengerId,Survived",
        comments="",
    )
    print("Prediction finished, results written to file")
if __name__ == "__main__":
    model = train()  # train and return the model
    test(model)  # predict with the model
## 1. Import packages and read the file
import os
import numpy as np
import pandas as pd
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
train_data = pd.read_csv("/kaggle/input/titaniccsv/titanic.csv")
print(train_data.info())
## 2. Data cleaning
from sklearn.ensemble import RandomForestRegressor
age = train_data[["Age", "Survived", "Fare", "Parch", "SibSp", "Pclass"]]
age_notnull = age.loc[(train_data.Age.notnull())]
age_isnull = age.loc[(train_data.Age.isnull())]
X = age_notnull.values[:, 1:]
Y = age_notnull.values[:, 0]
rfr = RandomForestRegressor(n_estimators=1000, n_jobs=-1)
rfr.fit(X, Y)
predictAges = rfr.predict(age_isnull.values[:, 1:])
train_data.loc[(train_data.Age.isnull()), "Age"] = predictAges
train_data.loc[train_data["Sex"] == "male", "Sex"] = 0
train_data.loc[train_data["Sex"] == "female", "Sex"] = 1
train_data["Embarked"] = train_data["Embarked"].fillna("S")
train_data.loc[train_data["Embarked"] == "S", "Embarked"] = 0
train_data.loc[train_data["Embarked"] == "C", "Embarked"] = 1
train_data.loc[train_data["Embarked"] == "Q", "Embarked"] = 2
train_data.drop(["Cabin"], axis=1, inplace=True)
train_data["Deceased"] = train_data["Survived"].apply(lambda s: 1 - s)
train_data.info()
## 3. Model building
dataset_X = train_data[["Sex", "Age", "Pclass", "SibSp", "Parch", "Fare"]]
dataset_Y = train_data[["Deceased", "Survived"]]
from sklearn.model_selection import train_test_split
X_train, X_val, Y_train, Y_val = train_test_split(
dataset_X.iloc[:, :].values,
dataset_Y.iloc[:, :].values,
test_size=0.2,
random_state=42,
)
x = tf.placeholder(tf.float32, shape=[None, 6], name="input")
y = tf.placeholder(tf.float32, shape=[None, 2], name="label")
weights1 = tf.Variable(tf.random_normal([6, 6]), name="weights1")
bias1 = tf.Variable(tf.zeros([6]), name="bias1")
a = tf.nn.relu(tf.matmul(x, weights1) + bias1)
weights2 = tf.Variable(tf.random_normal([6, 2]), name="weights2")
bias2 = tf.Variable(tf.zeros([2]), name="bias2")
z = tf.matmul(a, weights2) + bias2
y_pred = tf.nn.softmax(z)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=z))
correct_pred = tf.equal(tf.argmax(y, 1), tf.argmax(y_pred, 1))
acc_op = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
train_op = tf.train.AdamOptimizer(0.001).minimize(cost)
# Checkpoint saving entry point
# saver = tf.train.Saver()
# Variables defined after the Saver is declared will not be saved
# non_storable_variable = tf.Variable(777)
# ckpt_dir = './ckpt_dir'
# if not os.path.exists(ckpt_dir):
# os.makedirs(ckpt_dir)
with tf.Session() as sess:
tf.global_variables_initializer().run()
# ckpt = tf.train.latest_checkpoint(ckpt_dir)
# if ckpt:
# print('Restoring from checkpoint: %s' % ckpt)
# saver.restore(sess, ckpt)
for epoch in range(30):
total_loss = 0.0
for i in range(len(X_train)):
feed_dict = {x: [X_train[i]], y: [Y_train[i]]}
_, loss = sess.run([train_op, cost], feed_dict=feed_dict)
total_loss += loss
print("Epoch: %4d, total loss = %.12f" % (epoch, total_loss))
if epoch % 10 == 0:
accuracy = sess.run(acc_op, feed_dict={x: X_val, y: Y_val})
print("Accuracy on validation set: %.9f" % accuracy)
            # saver.save(sess, ckpt_dir + "/logistic.ckpt")  # saver and ckpt_dir exist only in the commented-out block above
print("training complete!")
accuracy = sess.run(acc_op, feed_dict={x: X_val, y: Y_val})
print("Accuracy on validation set: %.9f" % accuracy)
pred = sess.run(y_pred, feed_dict={x: X_val})
correct = np.equal(np.argmax(pred, 1), np.argmax(Y_val, 1))
numpy_accuracy = np.mean(correct.astype(np.float32))
print("Accuracy on validation set (numpy): %.9f" % numpy_accuracy)
# saver.save(sess, ckpt_dir + '/logistic.ckpt')
"""
测试数据的清洗和训练数据一样,两者可以共同完成
"""
# Read the test data
test_data = pd.read_csv("/kaggle/input/titaniccsv/test.csv")
# Data cleaning and preprocessing
test_data.loc[test_data["Sex"] == "male", "Sex"] = 0
test_data.loc[test_data["Sex"] == "female", "Sex"] = 1
age = test_data[["Age", "Sex", "Parch", "SibSp", "Pclass"]]
age_notnull = age.loc[(test_data.Age.notnull())]
age_isnull = age.loc[(test_data.Age.isnull())]
X = age_notnull.values[:, 1:]
Y = age_notnull.values[:, 0]
rfr = RandomForestRegressor(n_estimators=1000, n_jobs=-1)
rfr.fit(X, Y)
predictAges = rfr.predict(age_isnull.values[:, 1:])
test_data.loc[(test_data.Age.isnull()), "Age"] = predictAges
test_data["Embarked"] = test_data["Embarked"].fillna("S")
test_data.loc[test_data["Embarked"] == "S", "Embarked"] = 0
test_data.loc[test_data["Embarked"] == "C", "Embarked"] = 1
test_data.loc[test_data["Embarked"] == "Q", "Embarked"] = 2
test_data.drop(["Cabin"], axis=1, inplace=True)
# Feature selection
X_test = test_data[["Sex", "Age", "Pclass", "SibSp", "Parch", "Fare"]]
# Evaluate the model on the test set
predictions = np.argmax(sess.run(y_pred, feed_dict={x: X_test}), 1)
# Save the results
submission = pd.DataFrame(
{"PassengerId": test_data["PassengerId"], "Survived": predictions}
)
submission.to_csv("/kaggle/working/machine-learning-homework-2.csv", index=False)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/146/129146739.ipynb
|
data-science-day1-titanic
|
soutarokirihara
|
[{"Id": 129146739, "ScriptId": 38384735, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14936350, "CreationDate": "05/11/2023 10:42:47", "VersionNumber": 1.0, "Title": "notebook71db19d4f8", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 227.0, "LinesInsertedFromPrevious": 227.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184937529, "KernelVersionId": 129146739, "SourceDatasetVersionId": 2080558}, {"Id": 184937530, "KernelVersionId": 129146739, "SourceDatasetVersionId": 2361242}]
|
[{"Id": 2080558, "DatasetId": 1247358, "DatasourceVersionId": 2120923, "CreatorUserId": 7088777, "LicenseName": "Unknown", "CreationDate": "04/02/2021 13:27:16", "VersionNumber": 1.0, "Title": "Data science DAY1 Titanic", "Slug": "data-science-day1-titanic", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1247358, "CreatorUserId": 7088777, "OwnerUserId": 7088777.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2080558.0, "CurrentDatasourceVersionId": 2120923.0, "ForumId": 1265674, "Type": 2, "CreationDate": "04/02/2021 13:27:16", "LastActivityDate": "04/02/2021", "TotalViews": 6175, "TotalDownloads": 385, "TotalVotes": 16, "TotalKernels": 88}]
|
[{"Id": 7088777, "UserName": "soutarokirihara", "DisplayName": "Soutaro Kirihara", "RegisterDate": "04/02/2021", "PerformanceTier": 0}]
|
# Chongqing University of Posts and Telecommunications, "Artificial Intelligence and Machine Learning" lab course, Experiment 2 base code
# Use a neural network model to predict whether Titanic passengers survived
# Version 2023.05.10.16.00
# Lab requirements:
# 1 Understand the base code
# 2 Improve the base code to raise prediction accuracy
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
import tensorflow.compat.v1 as tf
import pandas as pd
tf.disable_v2_behavior()
# Define the training, test and output file names
train_file = "./titanic/train.csv"
test_file = "./titanic/test.csv"
reslut_file = "./titanic/myreslut.csv"  # predictions are written to this csv file so they can be uploaded to the competition site for scoring
def train():
    # Load the training data
    train_data = np.genfromtxt(
        train_file, dtype=float, delimiter=",", skip_header=1, usecols=(6, 10)
    )  # columns 6 and 10 (not 5 and 9) because the Name field contains a comma and gets split into two fields
    train_label = np.genfromtxt(
        train_file, dtype=float, delimiter=",", skip_header=1, usecols=(1)
    )
    # Fill missing values with the column mean
    tmp = np.nanmean(train_data[:, 0])
    np.nan_to_num(train_data[:, 0], nan=tmp, copy=False)
    tmp = np.nanmean(train_data[:, 1])
    np.nan_to_num(train_data[:, 1], nan=tmp, copy=False)
    # Normalize the data
    train_data = train_data / 1000  # ages are under 100 and fares under 1000, so dividing by 1000 scales everything into [0, 1]
    # Build the neural network model
    model = Sequential()
    model.add(Dense(units=4, input_dim=2))  # 2 inputs (age and fare), `units` outputs
    model.add(
        Dense(units=1, activation="sigmoid")
    )  # input size defaults to the previous layer's output size; 1 output (survived or not), sigmoid activation
    # Compile the model
    model.compile(loss="binary_crossentropy", optimizer="sgd", metrics=["accuracy"])
    # Train
    t = 0  # iteration counter
    T = 50000  # number of iterations
    print("Start training:")
    while t < T:
        loss, acc = model.train_on_batch(train_data, train_label)
        if t % 100 == 0:
            print(t, ": loss=", loss, "; acc=", acc)
        t = t + 1  # increment the iteration counter
    print("Training finished:")
    print("Trained for", t, "iterations")
    print("Final: loss=", loss, "; acc=", acc)
return model
def test(model):
    # Load the test data
    test_data = np.genfromtxt(
        test_file, dtype=float, delimiter=",", skip_header=1, usecols=(5, 9)
    )
    # Fill missing values with the column mean
    tmp = np.nanmean(test_data[:, 0])
    np.nan_to_num(test_data[:, 0], nan=tmp, copy=False)
    tmp = np.nanmean(test_data[:, 1])
    np.nan_to_num(test_data[:, 1], nan=tmp, copy=False)
    # Normalize the data
    test_data = test_data / 1000  # ages are under 100 and fares under 1000, so dividing by 1000 scales everything into [0, 1]
    # Predict
    print("Start predicting:")
    y_hat = model.predict(test_data)
    # values below 0.5 are labeled 0, values of 0.5 or above are labeled 1
    y_hat[y_hat < 0.5] = 0
    y_hat[y_hat >= 0.5] = 1
    # Write the result file
    tmp = np.arange(892, 1310)  # the first column is the PassengerId, from 892 to 1309
    result_data = np.c_[tmp, y_hat.astype(int)]  # concatenate the two vectors column-wise
    np.savetxt(
        reslut_file,
        result_data,
        fmt="%d",
        delimiter=",",
        header="PassengerId,Survived",
        comments="",
    )
    print("Prediction finished, results written to file")
if __name__ == "__main__":
    model = train()  # train and return the model
    test(model)  # predict with the model
## 1. Import packages and read the file
import os
import numpy as np
import pandas as pd
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
train_data = pd.read_csv("/kaggle/input/titaniccsv/titanic.csv")
print(train_data.info())
## 2. Data cleaning
from sklearn.ensemble import RandomForestRegressor
age = train_data[["Age", "Survived", "Fare", "Parch", "SibSp", "Pclass"]]
age_notnull = age.loc[(train_data.Age.notnull())]
age_isnull = age.loc[(train_data.Age.isnull())]
X = age_notnull.values[:, 1:]
Y = age_notnull.values[:, 0]
rfr = RandomForestRegressor(n_estimators=1000, n_jobs=-1)
rfr.fit(X, Y)
predictAges = rfr.predict(age_isnull.values[:, 1:])
train_data.loc[(train_data.Age.isnull()), "Age"] = predictAges
train_data.loc[train_data["Sex"] == "male", "Sex"] = 0
train_data.loc[train_data["Sex"] == "female", "Sex"] = 1
train_data["Embarked"] = train_data["Embarked"].fillna("S")
train_data.loc[train_data["Embarked"] == "S", "Embarked"] = 0
train_data.loc[train_data["Embarked"] == "C", "Embarked"] = 1
train_data.loc[train_data["Embarked"] == "Q", "Embarked"] = 2
train_data.drop(["Cabin"], axis=1, inplace=True)
train_data["Deceased"] = train_data["Survived"].apply(lambda s: 1 - s)
train_data.info()
## 3. Model building
dataset_X = train_data[["Sex", "Age", "Pclass", "SibSp", "Parch", "Fare"]]
dataset_Y = train_data[["Deceased", "Survived"]]
from sklearn.model_selection import train_test_split
X_train, X_val, Y_train, Y_val = train_test_split(
dataset_X.iloc[:, :].values,
dataset_Y.iloc[:, :].values,
test_size=0.2,
random_state=42,
)
x = tf.placeholder(tf.float32, shape=[None, 6], name="input")
y = tf.placeholder(tf.float32, shape=[None, 2], name="label")
weights1 = tf.Variable(tf.random_normal([6, 6]), name="weights1")
bias1 = tf.Variable(tf.zeros([6]), name="bias1")
a = tf.nn.relu(tf.matmul(x, weights1) + bias1)
weights2 = tf.Variable(tf.random_normal([6, 2]), name="weights2")
bias2 = tf.Variable(tf.zeros([2]), name="bias2")
z = tf.matmul(a, weights2) + bias2
y_pred = tf.nn.softmax(z)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=z))
correct_pred = tf.equal(tf.argmax(y, 1), tf.argmax(y_pred, 1))
acc_op = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
train_op = tf.train.AdamOptimizer(0.001).minimize(cost)
# Checkpoint saving entry point
# saver = tf.train.Saver()
# Variables defined after the Saver is declared will not be saved
# non_storable_variable = tf.Variable(777)
# ckpt_dir = './ckpt_dir'
# if not os.path.exists(ckpt_dir):
# os.makedirs(ckpt_dir)
with tf.Session() as sess:
tf.global_variables_initializer().run()
# ckpt = tf.train.latest_checkpoint(ckpt_dir)
# if ckpt:
# print('Restoring from checkpoint: %s' % ckpt)
# saver.restore(sess, ckpt)
for epoch in range(30):
total_loss = 0.0
for i in range(len(X_train)):
feed_dict = {x: [X_train[i]], y: [Y_train[i]]}
_, loss = sess.run([train_op, cost], feed_dict=feed_dict)
total_loss += loss
print("Epoch: %4d, total loss = %.12f" % (epoch, total_loss))
if epoch % 10 == 0:
accuracy = sess.run(acc_op, feed_dict={x: X_val, y: Y_val})
print("Accuracy on validation set: %.9f" % accuracy)
            # saver.save(sess, ckpt_dir + "/logistic.ckpt")  # saver and ckpt_dir exist only in the commented-out block above
print("training complete!")
accuracy = sess.run(acc_op, feed_dict={x: X_val, y: Y_val})
print("Accuracy on validation set: %.9f" % accuracy)
pred = sess.run(y_pred, feed_dict={x: X_val})
correct = np.equal(np.argmax(pred, 1), np.argmax(Y_val, 1))
numpy_accuracy = np.mean(correct.astype(np.float32))
print("Accuracy on validation set (numpy): %.9f" % numpy_accuracy)
# saver.save(sess, ckpt_dir + '/logistic.ckpt')
"""
测试数据的清洗和训练数据一样,两者可以共同完成
"""
# Read the test data
test_data = pd.read_csv("/kaggle/input/titaniccsv/test.csv")
# Data cleaning and preprocessing
test_data.loc[test_data["Sex"] == "male", "Sex"] = 0
test_data.loc[test_data["Sex"] == "female", "Sex"] = 1
age = test_data[["Age", "Sex", "Parch", "SibSp", "Pclass"]]
age_notnull = age.loc[(test_data.Age.notnull())]
age_isnull = age.loc[(test_data.Age.isnull())]
X = age_notnull.values[:, 1:]
Y = age_notnull.values[:, 0]
rfr = RandomForestRegressor(n_estimators=1000, n_jobs=-1)
rfr.fit(X, Y)
predictAges = rfr.predict(age_isnull.values[:, 1:])
test_data.loc[(test_data.Age.isnull()), "Age"] = predictAges
test_data["Embarked"] = test_data["Embarked"].fillna("S")
test_data.loc[test_data["Embarked"] == "S", "Embarked"] = 0
test_data.loc[test_data["Embarked"] == "C", "Embarked"] = 1
test_data.loc[test_data["Embarked"] == "Q", "Embarked"] = 2
test_data.drop(["Cabin"], axis=1, inplace=True)
# Feature selection
X_test = test_data[["Sex", "Age", "Pclass", "SibSp", "Parch", "Fare"]]
# Evaluate the model on the test set
predictions = np.argmax(sess.run(y_pred, feed_dict={x: X_test}), 1)
# Save the results
submission = pd.DataFrame(
{"PassengerId": test_data["PassengerId"], "Survived": predictions}
)
submission.to_csv("/kaggle/working/machine-learning-homework-2.csv", index=False)
| false | 2 | 3,148 | 0 | 3,177 | 3,148 |
||
129146863
|
from duckduckgo_search import ddg_images
from fastcore.all import *
# fastai's vision module provides download_images, resize_images, DataBlock,
# ImageBlock, CategoryBlock, RandomSplitter, parent_label, Resize and
# get_image_files used below
from fastai.vision.all import *
def search_images(term, max_images=30):
return L(ddg_images(term, max_results=max_images)).itemgot("image")
urls = search_images("pokemon", max_images=10)
urls
directory = Path("pokemon_or_not")
from fastdownload import download_url
searches = ["pokemon", "golden retriever"]
root_file_name_pokemon = "pokemon"
root_file_name_golden_retriever = "golden_retriever"
file_ext = ".jpg"
for search in searches:
path = directory / search
path.mkdir(exist_ok=True, parents=True)
download_images(path, urls=search_images(f"{search} photo"))
resize_images(directory / search, max_size=400, dest=directory / search)
data_loaders = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(valid_pct=0.2, seed=42),
get_y=parent_label,
item_tfms=[Resize(192, method="squish")],
).dataloaders(directory, bs=32)
data_loaders.show_batch(max_n=10)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/146/129146863.ipynb
| null | null |
[{"Id": 129146863, "ScriptId": 38392733, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14949693, "CreationDate": "05/11/2023 10:44:06", "VersionNumber": 2.0, "Title": "Pokemon identification", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 36.0, "LinesInsertedFromPrevious": 23.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 13.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
from duckduckgo_search import ddg_images
from fastcore.all import *
# fastai's vision module provides download_images, resize_images, DataBlock,
# ImageBlock, CategoryBlock, RandomSplitter, parent_label, Resize and
# get_image_files used below
from fastai.vision.all import *
def search_images(term, max_images=30):
return L(ddg_images(term, max_results=max_images)).itemgot("image")
urls = search_images("pokemon", max_images=10)
urls
directory = Path("pokemon_or_not")
from fastdownload import download_url
searches = ["pokemon", "golden retriever"]
root_file_name_pokemon = "pokemon"
root_file_name_golden_retriever = "golden_retriever"
file_ext = ".jpg"
for search in searches:
path = directory / search
path.mkdir(exist_ok=True, parents=True)
download_images(path, urls=search_images(f"{search} photo"))
resize_images(directory / search, max_size=400, dest=directory / search)
data_loaders = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(valid_pct=0.2, seed=42),
get_y=parent_label,
item_tfms=[Resize(192, method="squish")],
).dataloaders(directory, bs=32)
data_loaders.show_batch(max_n=10)
| false | 0 | 322 | 0 | 322 | 322 |
||
129131236
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import (
OrdinalEncoder,
OneHotEncoder,
StandardScaler,
MinMaxScaler,
)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.metrics import accuracy_score, confusion_matrix, ConfusionMatrixDisplay
from sklearn import datasets
from sklearn.datasets import fetch_openml
from itertools import product
import warnings
# Load the digits dataset
X, y = datasets.load_digits(return_X_y=True)
# MNIST
Xm, ym = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
# #### Multiclass classification
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=53
)
clf = MLPClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# ##### Evaluation on data normalized with StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
clf = MLPClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# ##### Classification of the second dataset
Xm_train, Xm_test, ym_train, ym_test = train_test_split(
Xm, ym, test_size=0.3, random_state=53
)
clf = MLPClassifier()
clf.fit(Xm_train, ym_train)
clf.score(Xm_test, ym_test)
# ##### Normalization with StandardScaler and evaluation
scaler = StandardScaler()
scaler.fit(Xm_train)
Xm_train = scaler.transform(Xm_train)
Xm_test = scaler.transform(Xm_test)
clf = MLPClassifier()
clf.fit(Xm_train, ym_train)
clf.score(Xm_test, ym_test)
# #### Binary classification into even and odd digits
# target variable with two classes
y_binary = [1 if val % 2 != 0 else 0 for val in y]
# ##### Train/test split and classifier evaluation
X_train, X_test, y_train, y_test = train_test_split(
X, y_binary, test_size=0.2, random_state=42
)
clf = MLPClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# Accuracy is very high, but let's also check with normalization
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
clf = MLPClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# helper function to fit and score an MLP classifier
def MLP_score(X_train, X_test, y_train, y_test):
clf = MLPClassifier()
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
# Convert the target labels from strings to an integer type
ym = ym.astype(np.uint8)
ym_binary = [1 if val % 2 != 0 else 0 for val in ym]
Xm_train, Xm_test, ym_train, ym_test = train_test_split(
Xm, ym_binary, test_size=0.2, random_state=42
)
MLP_score(Xm_train, Xm_test, ym_train, ym_test)
# #### Binary classification into '0' versus all other digits
# New two-class target variable with the following conditions
y_binary = [0 if val == 0 else 1 for val in y]
ym_binary = [0 if val == 0 else 1 for val in ym]
# For the first dataset
X_train, X_test, y_train, y_test = train_test_split(
X, y_binary, test_size=0.2, random_state=42
)
MLP_score(X_train, X_test, y_train, y_test)
# For the second dataset
Xm_train, Xm_test, ym_train, ym_test = train_test_split(
Xm, ym_binary, test_size=0.2, random_state=42
)
MLP_score(Xm_train, Xm_test, ym_train, ym_test)
clf = MLPClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
predictions = clf.predict(X_test)
cm = confusion_matrix(y_test, predictions, labels=clf.classes_)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=clf.classes_)
disp.plot()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/131/129131236.ipynb
| null | null |
[{"Id": 129131236, "ScriptId": 38345921, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6796004, "CreationDate": "05/11/2023 08:23:15", "VersionNumber": 1.0, "Title": "t6_digits", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 147.0, "LinesInsertedFromPrevious": 147.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import (
OrdinalEncoder,
OneHotEncoder,
StandardScaler,
MinMaxScaler,
)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.metrics import accuracy_score, confusion_matrix, ConfusionMatrixDisplay
from sklearn import datasets
from sklearn.datasets import fetch_openml
from itertools import product
import warnings
# Load the digits dataset
X, y = datasets.load_digits(return_X_y=True)
# MNIST
Xm, ym = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
# #### Multiclass classification
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=53
)
clf = MLPClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# ##### Evaluation on data normalized with StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
clf = MLPClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# ##### Classification of the second dataset
Xm_train, Xm_test, ym_train, ym_test = train_test_split(
Xm, ym, test_size=0.3, random_state=53
)
clf = MLPClassifier()
clf.fit(Xm_train, ym_train)
clf.score(Xm_test, ym_test)
# ##### Normalization with StandardScaler and evaluation
scaler = StandardScaler()
scaler.fit(Xm_train)
Xm_train = scaler.transform(Xm_train)
Xm_test = scaler.transform(Xm_test)
clf = MLPClassifier()
clf.fit(Xm_train, ym_train)
clf.score(Xm_test, ym_test)
# #### Binary classification into even and odd digits
# target variable with two classes
y_binary = [1 if val % 2 != 0 else 0 for val in y]
# ##### Train/test split and classifier evaluation
X_train, X_test, y_train, y_test = train_test_split(
X, y_binary, test_size=0.2, random_state=42
)
clf = MLPClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# Accuracy is very high, but let's also check with normalization
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
clf = MLPClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# helper function to fit and score an MLP classifier
def MLP_score(X_train, X_test, y_train, y_test):
clf = MLPClassifier()
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
# Convert the target labels from strings to an integer type
ym = ym.astype(np.uint8)
ym_binary = [1 if val % 2 != 0 else 0 for val in ym]
Xm_train, Xm_test, ym_train, ym_test = train_test_split(
Xm, ym_binary, test_size=0.2, random_state=42
)
MLP_score(Xm_train, Xm_test, ym_train, ym_test)
# #### Binary classification into '0' versus all other digits
# New two-class target variable with the following conditions
y_binary = [0 if val == 0 else 1 for val in y]
ym_binary = [0 if val == 0 else 1 for val in ym]
# For the first dataset
X_train, X_test, y_train, y_test = train_test_split(
X, y_binary, test_size=0.2, random_state=42
)
MLP_score(X_train, X_test, y_train, y_test)
# For the second dataset
Xm_train, Xm_test, ym_train, ym_test = train_test_split(
Xm, ym_binary, test_size=0.2, random_state=42
)
MLP_score(Xm_train, Xm_test, ym_train, ym_test)
clf = MLPClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
predictions = clf.predict(X_test)
cm = confusion_matrix(y_test, predictions, labels=clf.classes_)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=clf.classes_)
disp.plot()
| false | 0 | 1,603 | 0 | 1,603 | 1,603 |
||
129280823
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # The dataset is about kids being bullied; the data was collected from a region in the USA. It contains data about on-school, off-school and cyber bullying, the victims' age and sex, how many times they were physically attacked or involved in physical fights, how many close friends they have, whether they skip school without permission, etc.
# ### OnSchool_Bullying_12mo : Bullying on school premesis in the past 12 months (Yes, No)
# ### OffSchool_Bullying_12mo : Bullying off school premesis in the past 12 months (Yes, No)
# ### Cyberbullying_12mo : Bullying on internet in the past 12 months (Yes, No)
# ### Custom_Age : Age of the victim
# ### Sex : Gender of the victim
# ### Physically_attacked : how many times the victim was physically attacked
# ### Physical_fighting : how many times the victim was involved in a physical fight
# ### Felt_lonely : how often the victim felt lonely because of bullying
# ### Close_friends : how many close friends the victim had
# ### Days_Unexcused_Absence : how many days the victim missed school without informing anyone
# ### Supportive_Classmates : how often the victim's classmates were supportive (rarely, always, sometimes, never, etc.)
# ### Supportive_Parents : how often the victim's parents were supportive (rarely, always, sometimes, never, etc.)
# ### Persistent_Loneliness : did they feel lonely too often (Yes, No)
# ### Unexcused_Absence : did they miss the school without informing (Yes, No)
# ### Underweight : was the victim underweight? (Yes, No, Unknown)
# ### Overweight : was the victim overweight? (Yes, No, Unknown)
# ### Obese : was the victim obese? (Yes, No, Unknown)
bully = pd.read_csv("/kaggle/input/bullying/Bullying.csv")
bully
# #### The column names are too long to read comfortably, hence we rename the columns to short, descriptive names; a sketch of this step follows
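# A minimal sketch of that renaming step. The original long survey-question column
# names are not reproduced in this notebook, so the keys below are hypothetical
# placeholders; the values are the short names used in the rest of the analysis.
rename_map = {
    "Bullied_on_school_property_in_past_12_months": "OnSchool_Bullying_12mo",
    "Bullied_not_on_school_property_in_past_12_months": "OffSchool_Bullying_12mo",
    "Cyber_bullied_in_past_12_months": "Cyberbullying_12mo",
}
bully = bully.rename(columns=rename_map)  # columns that are not present are left untouched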
bully
bully.drop("record", axis=1, inplace=True)
bully
# #### Custom_Age is supposed to be numerical but is stored as an object column containing strings, hence we extract the integer values and fill the missing entries with a typical age (14)
bully["Custom_Age"].unique()
bully.info()
bully["Custom_Age"] = bully["Custom_Age"].str.extract("(\d+)").astype(float)
bully["Custom_Age"].replace(" ", 14, inplace=True)
bully["Custom_Age"].fillna(14, inplace=True)
bully
bully["Custom_Age"].unique()
# #### In all the columns there are missing values (blank strings rather than NULLs), hence we replace them with the most frequent value of the respective column; a compact automated version of this step is sketched below
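# The cells below hard-code the most frequent value for each column. An equivalent,
# more compact sketch that derives the mode of the non-blank entries automatically
# (run on a copy here so it does not interfere with the step-by-step cells below):
bully_mode_filled = bully.copy()
for col in ["OnSchool_Bullying_12mo", "OffSchool_Bullying_12mo", "Cyberbullying_12mo", "Sex"]:
    mode_value = bully_mode_filled.loc[bully_mode_filled[col] != " ", col].mode()[0]
    bully_mode_filled[col] = bully_mode_filled[col].replace(" ", mode_value)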
bully["OnSchool_Bullying_12mo"].unique()
bully["OnSchool_Bullying_12mo"].replace(" ", "Yes", inplace=True)
bully["OnSchool_Bullying_12mo"].unique()
bully["OffSchool_Bullying_12mo"].unique()
bully["OffSchool_Bullying_12mo"].replace(" ", "Yes", inplace=True)
bully["OffSchool_Bullying_12mo"].unique()
bully["Cyberbullying_12mo"].unique()
bully["Cyberbullying_12mo"].replace(" ", "Yes", inplace=True)
bully["Cyberbullying_12mo"].unique()
bully["Sex"].unique()
bully["Sex"].value_counts()
bully["Sex"].replace(" ", "Male", inplace=True)
bully["Sex"].unique()
bully["Physically_attacked"].unique()
bully["Physically_attacked"].value_counts()
bully["Physically_attacked"].replace(1, "1 time", inplace=True)
bully["Physically_attacked"].value_counts()
bully["Physically_attacked"] = (
bully["Physically_attacked"].str.extract("^(\d+)").astype(float)
)
bully["Physically_attacked"].unique()
bully["Physical_fighting"].unique()
bully["Physical_fighting"].value_counts()
bully["Physical_fighting"].replace(" ", "0 times", inplace=True)
bully["Physical_fighting"].value_counts()
bully["Physical_fighting"] = (
bully["Physical_fighting"].str.extract("^(\d+)").astype(int)
)
bully["Physical_fighting"].unique()
bully["Felt_lonely"].unique()
bully["Felt_lonely"].value_counts()
bully["Felt_lonely"].replace(" ", "Never", inplace=True)
bully["Felt_lonely"].value_counts()
bully["Close_friends"].value_counts()
bully["Close_friends"].replace(" ", "3 or more", inplace=True)
bully["Close_friends"] = bully["Close_friends"].str.extract("^(\d+)").astype(int)
bully["Close_friends"].unique()
bully["Days_Unexcused_Absence"].value_counts()
bully["Days_Unexcused_Absence"].replace(" ", "0 days", inplace=True)
bully["Days_Unexcused_Absence"] = (
bully["Days_Unexcused_Absence"].str.extract("^(\d+)").astype(int)
)
bully["Days_Unexcused_Absence"].unique()
bully["Supportive_Classmates"].value_counts()
bully["Supportive_Classmates"].replace(" ", "Sometimes", inplace=True)
bully["Supportive_Classmates"].unique()
bully["Supportive_Parents"].value_counts()
bully["Supportive_Parents"].replace(" ", "Always", inplace=True)
bully["Supportive_Parents"].unique()
bully["Persistent_Loneliness"].value_counts()
bully["Persistent_Loneliness"].replace(" ", "No", inplace=True)
bully["Persistent_Loneliness"].unique()
bully["Unexcused_Absence"].value_counts()
bully["Unexcused_Absence"].replace(" ", "No", inplace=True)
bully["Unexcused_Absence"].unique()
# #### For the last 3 columns there are a lot of missing values (almost 40%), and we cannot drop those rows either, since doing so could distort the insights drawn from the data. Hence, we replace the missing values with the string 'Unknown'
bully["Underweight"].value_counts()
bully["Underweight"].replace(" ", "Unknown", inplace=True)
bully["Underweight"].unique()
bully["Overweight"].value_counts()
bully["Overweight"].replace(" ", "Unknown", inplace=True)
bully["Overweight"].unique()
bully["Obese"].value_counts()
bully["Obese"].replace(" ", "Unknown", inplace=True)
bully["Obese"].unique()
bully
bully.describe()
# #### The data now appears clean, with no outliers, no duplicates and no missing values. Hence, we proceed with analysing the data and generating results from it; a quick verification of these claims is sketched below
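# A quick sanity check (not part of the original notebook) that backs up the claims above
print("Duplicate rows:", bully.duplicated().sum())
print("Remaining blank cells:", (bully == " ").sum().sum())
print("Remaining NaN cells:", bully.isnull().sum().sum())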
# ## 1. How prevalent is bullying among the surveyed students?
import matplotlib.pyplot as plt
# Count the frequency of "Yes" for each type of bullying
bullying_counts = bully[
["OnSchool_Bullying_12mo", "OffSchool_Bullying_12mo", "Cyberbullying_12mo"]
].apply(lambda x: x[x == "Yes"].count())
# Define vibrant colors for each bar
colors = ["#FF5F6D", "#FFC371", "#00B9A7"]
# Plot the bar chart
plt.bar(bullying_counts.index, bullying_counts.values, color=colors)
# Add counts on top of each bar
for i, count in enumerate(bullying_counts.values):
plt.text(i, count, str(count), ha="center", va="bottom", fontweight="bold")
# Set the labels and title
plt.xlabel("Bullying Type")
plt.ylabel("Frequency")
plt.title("Prevalence of Bullying Among Surveyed Students")
# Update x-axis tick labels
plt.xticks(range(len(bullying_counts.index)), ["On-School", "Off-School", "Cyber"])
# Show the plot
plt.show()
# #### Students are most often bullied on school premises, but bullying tends to continue off school and on the internet as well.
# ## 2. Are there any gender differences in bullying experiences?
import matplotlib.pyplot as plt
# Filter the dataset for rows where OnSchool_Bullying_12mo is 'Yes'
on_school_bullying_yes = bully[bully["OnSchool_Bullying_12mo"] == "Yes"]
# Count the occurrences of each gender
on_school_gender_counts = on_school_bullying_yes["Sex"].value_counts()
# Filter the dataset for rows where OffSchool_Bullying_12mo is 'Yes'
off_school_bullying_yes = bully[bully["OffSchool_Bullying_12mo"] == "Yes"]
# Count the occurrences of each gender
off_school_gender_counts = off_school_bullying_yes["Sex"].value_counts()
# Filter the dataset for rows where Cyberbullying_12mo is 'Yes'
cyber_bullying_yes = bully[bully["Cyberbullying_12mo"] == "Yes"]
# Count the occurrences of each gender
cyber_bullying_gender_counts = cyber_bullying_yes["Sex"].value_counts()
# Set the colors for males and females
colors = ["#FF5F6D", "#00B9A7"]
# Plot the stacked bar chart
plt.bar(
["On-School", "Off-School", "Cyber"],
[
on_school_gender_counts["Male"],
off_school_gender_counts["Male"],
cyber_bullying_gender_counts["Male"],
],
color=colors[0],
label="Male",
)
plt.bar(
["On-School", "Off-School", "Cyber"],
[
on_school_gender_counts["Female"],
off_school_gender_counts["Female"],
cyber_bullying_gender_counts["Female"],
],
bottom=[
on_school_gender_counts["Male"],
off_school_gender_counts["Male"],
cyber_bullying_gender_counts["Male"],
],
color=colors[1],
label="Female",
)
# Add count values for males
for i, count in enumerate(
[
on_school_gender_counts["Male"],
off_school_gender_counts["Male"],
cyber_bullying_gender_counts["Male"],
]
):
plt.text(
i,
count / 2,
str(count),
ha="center",
va="center",
color="white",
fontweight="bold",
)
# Add count values for females
for i, count in enumerate(
[
on_school_gender_counts["Female"],
off_school_gender_counts["Female"],
cyber_bullying_gender_counts["Female"],
]
):
plt.text(
i,
count / 2 + on_school_gender_counts["Male"],
str(count),
ha="center",
va="center",
color="white",
fontweight="bold",
)
# Set the labels and title
plt.xlabel("Bullying Type")
plt.ylabel("Count")
plt.title("Gender Differences in Bullying Experiences")
# Add a legend
plt.legend()
# Show the plot
plt.show()
# #### From the above plot, female victims outnumber male victims in all three categories; for cyber-bullying, the count of female victims is almost double that of male victims.
# ## 3. How does the number of close friends relate to the feeling of loneliness due to bullying?
import matplotlib.pyplot as plt
import numpy as np
# Filter the dataset for rows where bullying is 'Yes'
bullying_yes_df = bully[bully["OnSchool_Bullying_12mo"] == "Yes"]
# Define colors for the bar graph
bar_colors = ["tab:blue", "tab:orange", "tab:green", "tab:red", "tab:purple"]
# Define colors for the line graphs
line_colors = ["tab:gray", "tab:cyan", "tab:pink", "tab:olive"]
# Count the occurrences of each value in 'Felt_Lonely' column
loneliness_counts = bullying_yes_df["Felt_lonely"].value_counts()
# Get unique values of 'Close_friends'
close_friends_values = np.sort(bullying_yes_df["Close_friends"].unique())
# Initialize a figure with two y-axes
fig, ax1 = plt.subplots()
# Plot the bar graph for 'Felt_Lonely' counts
ax1.bar(loneliness_counts.index, loneliness_counts.values, color=bar_colors)
ax1.set_xlabel("Felt Loneliness")
ax1.set_ylabel("Count", color="black")
ax1.tick_params("y", colors="black")
# Create line graphs for each value of 'Close_friends'
ax2 = ax1.twinx()
for i, close_friends_value in enumerate(close_friends_values):
close_friends_count = bullying_yes_df[
bullying_yes_df["Close_friends"] == close_friends_value
]["Felt_lonely"].value_counts()
ax2.plot(
close_friends_count.reindex(loneliness_counts.index).index,
close_friends_count.reindex(loneliness_counts.index).values,
marker="o",
label=f"Close Friends: {close_friends_value}",
color=line_colors[i % len(line_colors)],
)
ax2.set_ylabel("Number of Close Friends")
ax2.tick_params("y")
# Set the title and legends
plt.title("Count of Felt Loneliness and Number of Close Friends")
lines, labels = ax2.get_legend_handles_labels()
ax2.legend(lines, labels, loc="upper right")
# Rotate the x-axis labels vertically
plt.xticks(rotation=90)
# Adjust the layout and show the plot
fig.tight_layout()
plt.show()
# #### Victims most often report feeling lonely only 'sometimes', and the number of close friends barely changes this pattern: for every count of close friends, the most common answer is still that the victim felt lonely only sometimes during the period of bullying
# ## 4. Are there any differences in the level of support from classmates and parents?
bullying_yes_df = bully[bully["OnSchool_Bullying_12mo"] == "Yes"]
# Count the occurrences of each level of support from classmates
classmates_support_counts = bullying_yes_df["Supportive_Classmates"].value_counts()
# Count the occurrences of each level of support from parents
parents_support_counts = bullying_yes_df["Supportive_Parents"].value_counts()
# Get the unique support levels
support_levels = bullying_yes_df["Supportive_Classmates"].unique()
# Set the width of the bars
bar_width = 0.35
# Set the positions of the bars on the x-axis
r1 = range(len(support_levels))
r2 = [x + bar_width for x in r1]
# Create a grouped bar chart
plt.bar(
r1, classmates_support_counts, color="tab:blue", width=bar_width, label="Classmates"
)
plt.bar(
r2, parents_support_counts, color="tab:orange", width=bar_width, label="Parents"
)
# Set the x-axis labels and tick positions
plt.xlabel("Support Level")
plt.ylabel("Count")
plt.xticks([r + bar_width / 2 for r in range(len(support_levels))], support_levels)
# Set the title and legend
plt.title("Comparison of Support from Classmates and Parents")
plt.legend()
# Show the plot
plt.show()
# #### From the above graph, parents appear to be more supportive than classmates when a child is being bullied: parents dominate the 'Most of the time' and 'Always' support levels, which is a good sign for the victims
# ## 5. Does persistent loneliness have an impact on school attendance?
import matplotlib.pyplot as plt
import seaborn as sns
# Filter the dataset for rows where bullying is 'Yes'
bullying_yes_df = bully[bully["OnSchool_Bullying_12mo"] == "Yes"]
# Select the attendance data for students with and without persistent loneliness
attendance_with_loneliness = bullying_yes_df[
bullying_yes_df["Persistent_Loneliness"] == "Yes"
]["Days_Unexcused_Absence"]
attendance_without_loneliness = bullying_yes_df[
bullying_yes_df["Persistent_Loneliness"] == "No"
]["Days_Unexcused_Absence"]
# Combine the attendance data into a single DataFrame
attendance_data = pd.DataFrame(
{
"With Persistent Loneliness": attendance_with_loneliness,
"Without Persistent Loneliness": attendance_without_loneliness,
}
)
# Create a violin plot to compare attendance
sns.violinplot(data=attendance_data)
# Set the labels and title
plt.xlabel("Persistent Loneliness")
plt.ylabel("Days of Unexcused Absence")
plt.title("Impact of Persistent Loneliness on School Attendance")
# Show the plot
plt.show()
# #### Persistent loneliness does not appear to be a strong factor in school attendance: victims both with and without persistent loneliness tend to have close to 0 days of unexcused absence, although students without persistent loneliness attend school somewhat more regularly.
# ## 6. Is there a relationship between weight status (underweight, overweight, obese) and bullying experiences?
import matplotlib.pyplot as plt
# Filter the dataset for rows where bullying is 'Yes'
bullying_yes_df = bully[bully["OnSchool_Bullying_12mo"] == "Yes"]
on_school_counts = (
bullying_yes_df["Underweight"].value_counts(),
bullying_yes_df["Overweight"].value_counts(),
bullying_yes_df["Obese"].value_counts(),
)
bullying_yes_df = bully[bully["OffSchool_Bullying_12mo"] == "Yes"]
off_school_counts = (
bullying_yes_df["Underweight"].value_counts(),
bullying_yes_df["Overweight"].value_counts(),
bullying_yes_df["Obese"].value_counts(),
)
bullying_yes_df = bully[bully["Cyberbullying_12mo"] == "Yes"]
cyber_counts = (
bullying_yes_df["Underweight"].value_counts(),
bullying_yes_df["Overweight"].value_counts(),
bullying_yes_df["Obese"].value_counts(),
)
# Get the unique weight statuses
weight_statuses = ["On-School", "Off-School", "Cyber"]
# Set the width of the bars
bar_width = 0.3
# Set the positions of the bars on the x-axis
r1 = range(len(weight_statuses))
r2 = [x + bar_width for x in r1]
r3 = [x + 2 * bar_width for x in r1]
# Create a grouped bar chart of the count of "Yes" answers for each weight status
# within each bullying type
counts_by_type = (on_school_counts, off_school_counts, cyber_counts)
plt.bar(r1, [c[0].get("Yes", 0) for c in counts_by_type], color="tab:blue", width=bar_width, label="Underweight")
plt.bar(r2, [c[1].get("Yes", 0) for c in counts_by_type], color="tab:orange", width=bar_width, label="Overweight")
plt.bar(r3, [c[2].get("Yes", 0) for c in counts_by_type], color="tab:green", width=bar_width, label="Obese")
# Set the x-axis labels and tick positions
plt.xlabel("Bullying Type")
plt.ylabel("Count")
plt.xticks([r + bar_width for r in range(len(weight_statuses))], weight_statuses)
# Set the title and legend
plt.title("Weight Status by Bullying Type")
plt.legend()
# Show the plot
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/280/129280823.ipynb
| null | null |
[{"Id": 129280823, "ScriptId": 38393769, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8605824, "CreationDate": "05/12/2023 12:15:58", "VersionNumber": 2.0, "Title": "Bullying_Analysis", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 415.0, "LinesInsertedFromPrevious": 278.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 137.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # The dataset is about kids being bullied; the data were collected from a region in the USA. It contains information on on-school, off-school and cyber bullying, the victims' age and sex, how many times they were physically attacked or involved in physical fights, how many close friends they have, whether they skip school without permission, etc.
# ### OnSchool_Bullying_12mo : Bullying on school premises in the past 12 months (Yes, No)
# ### OffSchool_Bullying_12mo : Bullying off school premises in the past 12 months (Yes, No)
# ### Cyberbullying_12mo : Bullying on the internet in the past 12 months (Yes, No)
# ### Custom_Age : Age of the victim
# ### Sex : Gender of the victim
# ### Physically_attacked : how many times they were physically attacked
# ### Physical_fighting : how many times they were involved in physical fights themselves
# ### Felt_lonely : how often they felt lonely because of bullying
# ### Close_friends : how many close friends the victim had
# ### Days_Unexcused_Absence : how many days the victim missed school without informing the school
# ### Supportive_Classmates : how often were their classmates supportive to the victim (rarely, always, sometimes, never etc)
# ### Supportive_Parents : how often were their parents supportive to the victim (rarely, always, sometimes, never etc)
# ### Persistent_Loneliness : did they feel lonely too often (Yes, No)
# ### Unexcused_Absence : did they miss the school without informing (Yes, No)
# ### Underweight : was the victim underweight? (Yes, No, Unknown)
# ### Overweight : was the victim overweight? (Yes, No, Unknown)
# ### Obese : was the victim obese? (Yes, No, Unknown)
bully = pd.read_csv("/kaggle/input/bullying/Bullying.csv")
bully
# #### The column names are too long to read comfortably, hence we rename the columns to short, descriptive names
bully
bully.drop("record", axis=1, inplace=True)
bully
# #### Custom_Age is supposed to be numerical but is stored as an object column containing strings, hence we extract the integer values and fill the missing entries with a typical age (14)
bully["Custom_Age"].unique()
bully.info()
bully["Custom_Age"] = bully["Custom_Age"].str.extract("(\d+)").astype(float)
bully["Custom_Age"].replace(" ", 14, inplace=True)
bully["Custom_Age"].fillna(14, inplace=True)
bully
bully["Custom_Age"].unique()
# #### In all the columns there are missing values (blank strings rather than NULLs), hence we replace them with the most frequent value of the respective column
bully["OnSchool_Bullying_12mo"].unique()
bully["OnSchool_Bullying_12mo"].replace(" ", "Yes", inplace=True)
bully["OnSchool_Bullying_12mo"].unique()
bully["OffSchool_Bullying_12mo"].unique()
bully["OffSchool_Bullying_12mo"].replace(" ", "Yes", inplace=True)
bully["OffSchool_Bullying_12mo"].unique()
bully["Cyberbullying_12mo"].unique()
bully["Cyberbullying_12mo"].replace(" ", "Yes", inplace=True)
bully["Cyberbullying_12mo"].unique()
bully["Sex"].unique()
bully["Sex"].value_counts()
bully["Sex"].replace(" ", "Male", inplace=True)
bully["Sex"].unique()
bully["Physically_attacked"].unique()
bully["Physically_attacked"].value_counts()
bully["Physically_attacked"].replace(1, "1 time", inplace=True)
bully["Physically_attacked"].value_counts()
bully["Physically_attacked"] = (
bully["Physically_attacked"].str.extract("^(\d+)").astype(float)
)
bully["Physically_attacked"].unique()
bully["Physical_fighting"].unique()
bully["Physical_fighting"].value_counts()
bully["Physical_fighting"].replace(" ", "0 times", inplace=True)
bully["Physical_fighting"].value_counts()
bully["Physical_fighting"] = (
bully["Physical_fighting"].str.extract("^(\d+)").astype(int)
)
bully["Physical_fighting"].unique()
bully["Felt_lonely"].unique()
bully["Felt_lonely"].value_counts()
bully["Felt_lonely"].replace(" ", "Never", inplace=True)
bully["Felt_lonely"].value_counts()
bully["Close_friends"].value_counts()
bully["Close_friends"].replace(" ", "3 or more", inplace=True)
bully["Close_friends"] = bully["Close_friends"].str.extract("^(\d+)").astype(int)
bully["Close_friends"].unique()
bully["Days_Unexcused_Absence"].value_counts()
bully["Days_Unexcused_Absence"].replace(" ", "0 days", inplace=True)
bully["Days_Unexcused_Absence"] = (
bully["Days_Unexcused_Absence"].str.extract("^(\d+)").astype(int)
)
bully["Days_Unexcused_Absence"].unique()
bully["Supportive_Classmates"].value_counts()
bully["Supportive_Classmates"].replace(" ", "Sometimes", inplace=True)
bully["Supportive_Classmates"].unique()
bully["Supportive_Parents"].value_counts()
bully["Supportive_Parents"].replace(" ", "Always", inplace=True)
bully["Supportive_Parents"].unique()
bully["Persistent_Loneliness"].value_counts()
bully["Persistent_Loneliness"].replace(" ", "No", inplace=True)
bully["Persistent_Loneliness"].unique()
bully["Unexcused_Absence"].value_counts()
bully["Unexcused_Absence"].replace(" ", "No", inplace=True)
bully["Unexcused_Absence"].unique()
# #### For the last 3 columns there are a lot of missing values (almost 40%), and we cannot drop those rows either, since doing so could distort the insights drawn from the data. Hence, we replace the missing values with the string 'Unknown'
bully["Underweight"].value_counts()
bully["Underweight"].replace(" ", "Unknown", inplace=True)
bully["Underweight"].unique()
bully["Overweight"].value_counts()
bully["Overweight"].replace(" ", "Unknown", inplace=True)
bully["Overweight"].unique()
bully["Obese"].value_counts()
bully["Obese"].replace(" ", "Unknown", inplace=True)
bully["Obese"].unique()
bully
bully.describe()
# #### The data now appears clean, with no outliers, no duplicates and no missing values. Hence, we proceed with analysing the data and generating results from it
# ## 1. How prevalent is bullying among the surveyed students?
import matplotlib.pyplot as plt
# Count the frequency of "Yes" for each type of bullying
bullying_counts = bully[
["OnSchool_Bullying_12mo", "OffSchool_Bullying_12mo", "Cyberbullying_12mo"]
].apply(lambda x: x[x == "Yes"].count())
# Define vibrant colors for each bar
colors = ["#FF5F6D", "#FFC371", "#00B9A7"]
# Plot the bar chart
plt.bar(bullying_counts.index, bullying_counts.values, color=colors)
# Add counts on top of each bar
for i, count in enumerate(bullying_counts.values):
plt.text(i, count, str(count), ha="center", va="bottom", fontweight="bold")
# Set the labels and title
plt.xlabel("Bullying Type")
plt.ylabel("Frequency")
plt.title("Prevalence of Bullying Among Surveyed Students")
# Update x-axis tick labels
plt.xticks(range(len(bullying_counts.index)), ["On-School", "Off-School", "Cyber"])
# Show the plot
plt.show()
# #### Students are most often bullied on school premises, but bullying tends to continue off school and on the internet as well.
# ## 2. Are there any gender differences in bullying experiences?
import matplotlib.pyplot as plt
# Filter the dataset for rows where OnSchool_Bullying_12mo is 'Yes'
on_school_bullying_yes = bully[bully["OnSchool_Bullying_12mo"] == "Yes"]
# Count the occurrences of each gender
on_school_gender_counts = on_school_bullying_yes["Sex"].value_counts()
# Filter the dataset for rows where OffSchool_Bullying_12mo is 'Yes'
off_school_bullying_yes = bully[bully["OffSchool_Bullying_12mo"] == "Yes"]
# Count the occurrences of each gender
off_school_gender_counts = off_school_bullying_yes["Sex"].value_counts()
# Filter the dataset for rows where Cyberbullying_12mo is 'Yes'
cyber_bullying_yes = bully[bully["Cyberbullying_12mo"] == "Yes"]
# Count the occurrences of each gender
cyber_bullying_gender_counts = cyber_bullying_yes["Sex"].value_counts()
# Set the colors for males and females
colors = ["#FF5F6D", "#00B9A7"]
# Plot the stacked bar chart
plt.bar(
["On-School", "Off-School", "Cyber"],
[
on_school_gender_counts["Male"],
off_school_gender_counts["Male"],
cyber_bullying_gender_counts["Male"],
],
color=colors[0],
label="Male",
)
plt.bar(
["On-School", "Off-School", "Cyber"],
[
on_school_gender_counts["Female"],
off_school_gender_counts["Female"],
cyber_bullying_gender_counts["Female"],
],
bottom=[
on_school_gender_counts["Male"],
off_school_gender_counts["Male"],
cyber_bullying_gender_counts["Male"],
],
color=colors[1],
label="Female",
)
# Add count values for males
for i, count in enumerate(
[
on_school_gender_counts["Male"],
off_school_gender_counts["Male"],
cyber_bullying_gender_counts["Male"],
]
):
plt.text(
i,
count / 2,
str(count),
ha="center",
va="center",
color="white",
fontweight="bold",
)
# Add count values for females
for i, count in enumerate(
[
on_school_gender_counts["Female"],
off_school_gender_counts["Female"],
cyber_bullying_gender_counts["Female"],
]
):
plt.text(
i,
count / 2 + on_school_gender_counts["Male"],
str(count),
ha="center",
va="center",
color="white",
fontweight="bold",
)
# Set the labels and title
plt.xlabel("Bullying Type")
plt.ylabel("Count")
plt.title("Gender Differences in Bullying Experiences")
# Add a legend
plt.legend()
# Show the plot
plt.show()
# #### From the above plot, female victims outnumber male victims in all three categories; for cyber-bullying, the count of female victims is almost double that of male victims.
# ## 3. How does the number of close friends relate to the feeling of loneliness due to bullying?
import matplotlib.pyplot as plt
import numpy as np
# Filter the dataset for rows where bullying is 'Yes'
bullying_yes_df = bully[bully["OnSchool_Bullying_12mo"] == "Yes"]
# Define colors for the bar graph
bar_colors = ["tab:blue", "tab:orange", "tab:green", "tab:red", "tab:purple"]
# Define colors for the line graphs
line_colors = ["tab:gray", "tab:cyan", "tab:pink", "tab:olive"]
# Count the occurrences of each value in 'Felt_Lonely' column
loneliness_counts = bullying_yes_df["Felt_lonely"].value_counts()
# Get unique values of 'Close_friends'
close_friends_values = np.sort(bullying_yes_df["Close_friends"].unique())
# Initialize a figure with two y-axes
fig, ax1 = plt.subplots()
# Plot the bar graph for 'Felt_Lonely' counts
ax1.bar(loneliness_counts.index, loneliness_counts.values, color=bar_colors)
ax1.set_xlabel("Felt Loneliness")
ax1.set_ylabel("Count", color="black")
ax1.tick_params("y", colors="black")
# Create line graphs for each value of 'Close_friends'
ax2 = ax1.twinx()
for i, close_friends_value in enumerate(close_friends_values):
close_friends_count = bullying_yes_df[
bullying_yes_df["Close_friends"] == close_friends_value
]["Felt_lonely"].value_counts()
ax2.plot(
close_friends_count.reindex(loneliness_counts.index).index,
close_friends_count.reindex(loneliness_counts.index).values,
marker="o",
label=f"Close Friends: {close_friends_value}",
color=line_colors[i % len(line_colors)],
)
ax2.set_ylabel("Number of Close Friends")
ax2.tick_params("y")
# Set the title and legends
plt.title("Count of Felt Loneliness and Number of Close Friends")
lines, labels = ax2.get_legend_handles_labels()
ax2.legend(lines, labels, loc="upper right")
# Rotate the x-axis labels vertically
plt.xticks(rotation=90)
# Adjust the layout and show the plot
fig.tight_layout()
plt.show()
# #### Victims most often report feeling lonely only 'sometimes', and the number of close friends barely changes this pattern: for every count of close friends, the most common answer is still that the victim felt lonely only sometimes during the period of bullying
# ## 4. Are there any differences in the level of support from classmates and parents?
bullying_yes_df = bully[bully["OnSchool_Bullying_12mo"] == "Yes"]
# Count the occurrences of each level of support from classmates
classmates_support_counts = bullying_yes_df["Supportive_Classmates"].value_counts()
# Count the occurrences of each level of support from parents
parents_support_counts = bullying_yes_df["Supportive_Parents"].value_counts()
# Get the unique support levels
support_levels = bullying_yes_df["Supportive_Classmates"].unique()
# Set the width of the bars
bar_width = 0.35
# Set the positions of the bars on the x-axis
r1 = range(len(support_levels))
r2 = [x + bar_width for x in r1]
# Create a grouped bar chart
plt.bar(
r1, classmates_support_counts, color="tab:blue", width=bar_width, label="Classmates"
)
plt.bar(
r2, parents_support_counts, color="tab:orange", width=bar_width, label="Parents"
)
# Set the x-axis labels and tick positions
plt.xlabel("Support Level")
plt.ylabel("Count")
plt.xticks([r + bar_width / 2 for r in range(len(support_levels))], support_levels)
# Set the title and legend
plt.title("Comparison of Support from Classmates and Parents")
plt.legend()
# Show the plot
plt.show()
# #### From the above graph, parents appear to be more supportive than classmates when a child is being bullied: parents dominate the 'Most of the time' and 'Always' support levels, which is a good sign for the victims
# ## 5. Does persistent loneliness have an impact on school attendance?
import matplotlib.pyplot as plt
import seaborn as sns
# Filter the dataset for rows where bullying is 'Yes'
bullying_yes_df = bully[bully["OnSchool_Bullying_12mo"] == "Yes"]
# Select the attendance data for students with and without persistent loneliness
attendance_with_loneliness = bullying_yes_df[
bullying_yes_df["Persistent_Loneliness"] == "Yes"
]["Days_Unexcused_Absence"]
attendance_without_loneliness = bullying_yes_df[
bullying_yes_df["Persistent_Loneliness"] == "No"
]["Days_Unexcused_Absence"]
# Combine the attendance data into a single DataFrame
attendance_data = pd.DataFrame(
{
"With Persistent Loneliness": attendance_with_loneliness,
"Without Persistent Loneliness": attendance_without_loneliness,
}
)
# Create a violin plot to compare attendance
sns.violinplot(data=attendance_data)
# Set the labels and title
plt.xlabel("Persistent Loneliness")
plt.ylabel("Days of Unexcused Absence")
plt.title("Impact of Persistent Loneliness on School Attendance")
# Show the plot
plt.show()
# #### Persistent loneliness does not appear to be a strong factor in school attendance: victims both with and without persistent loneliness tend to have close to 0 days of unexcused absence, although students without persistent loneliness attend school somewhat more regularly.
# ## 6. Is there a relationship between weight status (underweight, overweight, obese) and bullying experiences?
import matplotlib.pyplot as plt
# Filter the dataset for rows where bullying is 'Yes'
bullying_yes_df = bully[bully["OnSchool_Bullying_12mo"] == "Yes"]
on_school_counts = (
bullying_yes_df["Underweight"].value_counts(),
bullying_yes_df["Overweight"].value_counts(),
bullying_yes_df["Obese"].value_counts(),
)
bullying_yes_df = bully[bully["OffSchool_Bullying_12mo"] == "Yes"]
off_school_counts = (
bullying_yes_df["Underweight"].value_counts(),
bullying_yes_df["Overweight"].value_counts(),
bullying_yes_df["Obese"].value_counts(),
)
bullying_yes_df = bully[bully["Cyberbullying_12mo"] == "Yes"]
cyber_counts = (
bullying_yes_df["Underweight"].value_counts(),
bullying_yes_df["Overweight"].value_counts(),
bullying_yes_df["Obese"].value_counts(),
)
# Get the unique weight statuses
weight_statuses = ["On-School", "Off-School", "Cyber"]
# Set the width of the bars
bar_width = 0.3
# Set the positions of the bars on the x-axis
r1 = range(len(weight_statuses))
r2 = [x + bar_width for x in r1]
r3 = [x + 2 * bar_width for x in r1]
# Create a grouped bar chart of the count of "Yes" answers for each weight status
# within each bullying type
counts_by_type = (on_school_counts, off_school_counts, cyber_counts)
plt.bar(r1, [c[0].get("Yes", 0) for c in counts_by_type], color="tab:blue", width=bar_width, label="Underweight")
plt.bar(r2, [c[1].get("Yes", 0) for c in counts_by_type], color="tab:orange", width=bar_width, label="Overweight")
plt.bar(r3, [c[2].get("Yes", 0) for c in counts_by_type], color="tab:green", width=bar_width, label="Obese")
# Set the x-axis labels and tick positions
plt.xlabel("Bullying Type")
plt.ylabel("Count")
plt.xticks([r + bar_width for r in range(len(weight_statuses))], weight_statuses)
# Set the title and legend
plt.title("Weight Status by Bullying Type")
plt.legend()
# Show the plot
plt.show()
| false | 0 | 5,288 | 1 | 5,288 | 5,288 |
||
129280768
|
<jupyter_start><jupyter_text>Diabetic Retinopathy (resized)
# Diabetic Retinopathy Detection Competition Dataset Resized/Cropped
In this dataset, I have included both a resized version of the dataset, and a cropped then resized version of the data.
## trainLabels.csv
This file contains the name of the file under the 'image' column and the label under the 'level' column.
## resized_train:
This folder was created by simply resizing the dataset to 1024x1024 if it is bigger than this size, else it remains the same.
The code used to create this dataset is:
import glob
import os
from tqdm import tqdm
import math
from PIL import Image
files = glob.glob('D:\\Experiments with Deep Learning\\DR Kaggle\\train\\train\\train\\*.jpeg')
new_width = 1024
for i in tqdm(range(len(files))):
img = Image.open(files[i])
width,height = img.size
ratio = height/width
if width > new_width:
new_image = img.resize((new_width,math.ceil(ratio*new_width)))
else:
new_image = img
    new_image.save('D:\\Experiments with Deep Learning\\DR Kaggle\\train\\train\\resized_train\\'+os.path.basename(files[i]))
## resized_train_cropped:
In this case, as much of the black space is cropped out by trying to identify the center and radius of the circle of the fundus image. Some of the images turned out to be fully black or very close to fully black, and no mask was found. Hence, those images were manually removed. There may still be some noisy images remaining, however.
The code used to create this dataset is:
# import the necessary packages
import numpy as np
import cv2
import glob
import os
from tqdm import tqdm
import math
from PIL import Image
files = glob.glob('D:\\Experiments with Deep Learning\\DR Kaggle\\train\\train\\train\\*.jpeg')
new_sz = 1024
def crop_image(image):
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret,gray = cv2.threshold(gray,10,255,cv2.THRESH_BINARY)
contours,hierarchy = cv2.findContours(gray,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
if not contours:
print('no contours!')
flag = 0
return image, flag
cnt = max(contours, key=cv2.contourArea)
((x, y), r) = cv2.minEnclosingCircle(cnt)
x = int(x); y = int(y); r = int(r)
flag = 1
#print(x,y,r)
if r > 100:
return output[0 + (y-r)*int(r
Kaggle dataset identifier: diabetic-retinopathy-resized
<jupyter_script>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
np.random.seed(2)
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import itertools
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
from keras.optimizers import RMSprop
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau
sns.set(style="white", context="notebook", palette="deep")
from PIL import Image
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
data_dir = "/kaggle/input/diabetic-retinopathy-resized/resized_train/resized_train"
# print('Number of training images:', len(os.listdir(data_dir)))
cropped_data_dir = "/kaggle/input/diabetic-retinopathy-resized/resized_train_cropped/resized_train_cropped/"
# print('Number of training images:', len(os.listdir(cropped_data_dir)))
train_labels = pd.read_csv("../input/diabetic-retinopathy-resized/trainLabels.csv")
train_labels.shape
# Check the data
train_labels.isnull().any().describe()
train_labels.head()
train_labels.tail()
sample_img = Image.open(os.path.join(data_dir, os.listdir(data_dir)[0]))
print("Image size:", sample_img.size)
sample_img = Image.open(os.path.join(cropped_data_dir, os.listdir(cropped_data_dir)[0]))
print("Image size:", sample_img.size)
f, axarr = plt.subplots(2, 2)
axarr[0, 0].imshow(Image.open(os.path.join(data_dir, os.listdir(data_dir)[0])))
axarr[0, 1].imshow(
Image.open(os.path.join(cropped_data_dir, os.listdir(cropped_data_dir)[0]))
)
axarr[1, 0].imshow(Image.open(os.path.join(data_dir, os.listdir(data_dir)[1])))
axarr[1, 1].imshow(
Image.open(os.path.join(cropped_data_dir, os.listdir(cropped_data_dir)[1]))
)
# **Check Data Distribution**
widths = []
heights = []
for img_file in os.listdir(data_dir):
img = Image.open(os.path.join(data_dir, img_file))
width, height = img.size
widths.append(width)
heights.append(height)
print("Average image size:", np.mean(widths), "x", np.mean(heights))
# Check the distribution of image sizes
fig, axs = plt.subplots(1, 2, figsize=(15, 6))
axs[0].hist(widths, bins=50)
axs[0].set_xlabel("Image width")
axs[0].set_ylabel("Frequency")
axs[1].hist(heights, bins=50)
axs[1].set_xlabel("Image height")
axs[1].set_ylabel("Frequency")
plt.show()
# Check the distribution of image modes
modes = []
for img_file in os.listdir(data_dir):
img = Image.open(os.path.join(data_dir, img_file))
modes.append(img.mode)
print("Image modes:", set(modes))
# **Check for class distribution**
# labels_df = pd.read_csv('../input/diabetic-retinopathy-resized/trainLabels.csv')
train_labels["level"].hist(bins=5)
plt.xlabel("Class")
plt.ylabel("Frequency")
plt.title("Class Distribution")
plt.show()
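# The histogram shows a heavy skew towards level 0. One common mitigation, not used in
# this notebook, is to weight the loss by inverse class frequency, e.g. with scikit-learn:
from sklearn.utils.class_weight import compute_class_weight

dr_classes = np.unique(train_labels["level"])
dr_weights = compute_class_weight(
    class_weight="balanced", classes=dr_classes, y=train_labels["level"]
)
class_weights = dict(zip(dr_classes, dr_weights))
print(class_weights)  # could later be passed as model.fit(..., class_weight=class_weights)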
# **Converting images into their pixel values as a 1D array in a CSV file**
import cv2
IMG_DIR = data_dir
# Read train_labels.csv to obtain the image name sequence
df_train = train_labels
image_sequence = df_train["image"].values
with open("eye_train.csv", "wb") as f:
for img_name in image_sequence:
img_path = os.path.join(IMG_DIR, img_name + ".jpeg")
# Process the image
img_array = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
# if img_array is not None:
img_pil = Image.fromarray(img_array)
img_28x28 = np.array(img_pil.resize((28, 28), Image.ANTIALIAS))
img_array = img_28x28.flatten()
# Normalize the images
img_array = img_array / 255.0
# Save the image pixel values to the CSV file
np.savetxt(f, img_array.reshape(1, -1), delimiter=",", header="")
# else:
# print(f"Failed to load image: {img_name}")
# import os
# os.remove("/#kaggle/working/eye_train.csv")
train_labels.image.unique().shape
# Note: np.savetxt above wrote no header line, so every row of eye_train.csv is pixel data;
# skiprows=1 therefore skips the first data row, which is read back and re-attached below.
df_eye_train = pd.read_csv("eye_train.csv", header=None, skiprows=1)
# read the skipped first row back as a plain data row
first_row = pd.read_csv("eye_train.csv", nrows=1, header=None).values[0]
df_eye_train.loc[-1] = first_row  # temporarily added at index -1
df_eye_train.index = df_eye_train.index + 1  # shift indices so the new row sorts first
df_eye_train = df_eye_train.sort_index().reset_index(drop=True)  # index starts from 0 again
df_eye_train.tail()
df_eye_train.shape
Y_train = train_labels["level"]
Y_train[135:145]
# ## **Performing Classification through CNN**
# **Label Encoding**
Y_train = to_categorical(Y_train, num_classes=5)
Y_train[135:145]
# **Split into training and validation set**
# Set the random seed
random_seed = 2
# Reshape images into 3 dimensions (height = 28px, width = 28px, channels = 1)
df_eye_train = df_eye_train.values.reshape(-1, 28, 28, 1)
df_eye_train.shape
Y_train.shape
# Split the train and the validation set for the fitting (80:20% split)
X_train, X_val, Y_train, Y_val = train_test_split(
df_eye_train, Y_train, test_size=0.2, random_state=random_seed
)
X_train.shape
Y_train.shape
# ## **CNN**
# Set the CNN model
# my CNN architecture is In -> [[Conv2D->relu]*2 -> MaxPool2D -> Dropout]*2 -> Flatten -> Dense -> Dropout -> Out
model = Sequential()
model.add(
Conv2D(
filters=32,
kernel_size=(5, 5),
padding="Same",
activation="relu",
input_shape=(28, 28, 1),
)
)
model.add(Conv2D(filters=32, kernel_size=(5, 5), padding="Same", activation="relu"))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="Same", activation="relu"))
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="Same", activation="relu"))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(5, activation="softmax"))
# ### **Defining Optimiser and Annealer**
# Define the optimizer
optimizer = RMSprop(learning_rate=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
# Compile the model
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
# Set a learning rate annealer
learning_rate_reduction = ReduceLROnPlateau(
    monitor="val_accuracy", patience=3, verbose=1, factor=0.5, min_lr=0.00001
)
epochs = 30
batch_size = 100
# Fit the model
history = model.fit(
    X_train,
    Y_train,
    epochs=epochs,
    batch_size=batch_size,
    validation_data=(X_val, Y_val),
    callbacks=[learning_rate_reduction],
)
# ## **Evaluating the Model**
# **Training and Validation curves**
# Retrieve the training and validation loss values
train_loss = history.history["loss"]
val_loss = history.history["val_loss"]
# Retrieve the training and validation accuracy values
train_acc = history.history["accuracy"]
val_acc = history.history["val_accuracy"]
# Plot the loss curves
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.plot(train_loss, label="Training Loss")
plt.plot(val_loss, label="Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.title("Loss Curves")
plt.legend()
# Plot the accuracy curves
plt.subplot(1, 2, 2)
plt.plot(train_acc, label="Training Accuracy")
plt.plot(val_acc, label="Validation Accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.title("Accuracy Curves")
plt.legend()
# Display the plot
plt.tight_layout()
plt.show()
# Look at confusion matrix
def plot_confusion_matrix(
cm, classes, normalize=False, title="Confusion matrix", cmap=plt.cm.Blues
):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation="nearest", cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.0
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(
j,
i,
cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black",
)
plt.tight_layout()
plt.ylabel("True label")
plt.xlabel("Predicted label")
# Predict the values from the validation dataset
Y_pred = model.predict(X_val)
# Convert predictions classes to one hot vectors
Y_pred_classes = np.argmax(Y_pred, axis=1)
# Convert validation observations to one hot vectors
Y_true = np.argmax(Y_val, axis=1)
# compute the confusion matrix
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
# plot the confusion matrix
plot_confusion_matrix(confusion_mtx, classes=range(5))  # 5 DR severity classes
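# Per-class precision/recall (not computed in the original notebook) complements the
# confusion matrix, especially on an imbalanced dataset like this one
from sklearn.metrics import classification_report

print(classification_report(Y_true, Y_pred_classes, digits=3))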
# Display some error results
# Errors are difference between predicted labels and true labels
errors = Y_pred_classes - Y_true != 0
Y_pred_classes_errors = Y_pred_classes[errors]
Y_pred_errors = Y_pred[errors]
Y_true_errors = Y_true[errors]
X_val_errors = X_val[errors]
def display_errors(errors_index, img_errors, pred_errors, obs_errors):
"""This function shows 6 images with their predicted and real labels"""
n = 0
nrows = 2
ncols = 3
fig, ax = plt.subplots(nrows, ncols, sharex=True, sharey=True)
for row in range(nrows):
for col in range(ncols):
error = errors_index[n]
ax[row, col].imshow((img_errors[error]).reshape((28, 28)))
ax[row, col].set_title(
"Predicted label :{}\nTrue label :{}".format(
pred_errors[error], obs_errors[error]
)
)
n += 1
# Probabilities of the wrong predicted numbers
Y_pred_errors_prob = np.max(Y_pred_errors, axis=1)
# Predicted probabilities of the true values in the error set
true_prob_errors = np.diagonal(np.take(Y_pred_errors, Y_true_errors, axis=1))
# Difference between the probability of the predicted label and the true label
delta_pred_true_errors = Y_pred_errors_prob - true_prob_errors
# Sorted list of the delta prob errors
sorted_dela_errors = np.argsort(delta_pred_true_errors)
# Top 6 errors
most_important_errors = sorted_dela_errors[-6:]
# Show the top 6 errors
display_errors(
most_important_errors, X_val_errors, Y_pred_classes_errors, Y_true_errors
)
# ## **Performing Classification through AlexNet**
# # AlexNet
# ## For AlexNet, the input size should be 227x227x3. For this, another CSV (or a data generator) of 227x227x3 images would have to be created and passed to the model, which is extremely time-consuming, and time is running out! A sketch of one possible generator-based workaround follows.
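# A minimal sketch (not the author's pipeline) of how 227x227x3 batches could be fed to
# AlexNet without building another CSV, using the ImageDataGenerator already imported
# above and the data_dir / train_labels variables defined earlier in this notebook:
alex_df = train_labels.copy()
alex_df["filename"] = alex_df["image"] + ".jpeg"  # JPEG files live in data_dir
alex_df["level"] = alex_df["level"].astype(str)  # flow_from_dataframe expects string labels
alex_datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
alex_train_gen = alex_datagen.flow_from_dataframe(
    alex_df, directory=data_dir, x_col="filename", y_col="level",
    target_size=(227, 227), class_mode="categorical", batch_size=32, subset="training",
)
alex_val_gen = alex_datagen.flow_from_dataframe(
    alex_df, directory=data_dir, x_col="filename", y_col="level",
    target_size=(227, 227), class_mode="categorical", batch_size=32, subset="validation",
)
# The AlexNet model defined below could then be trained with:
# model.fit(alex_train_gen, validation_data=alex_val_gen, epochs=10)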
# Define the AlexNet model
model = Sequential()
# Layer 1: Convolutional Layer
model.add(
Conv2D(
filters=96,
kernel_size=(11, 11),
strides=(4, 4),
activation="relu",
input_shape=(227, 227, 3),
)
)
model.add(MaxPool2D(pool_size=(3, 3), strides=(2, 2)))
# Layer 2: Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(5, 5), strides=(1, 1), activation="relu"))
model.add(MaxPool2D(pool_size=(3, 3), strides=(2, 2)))
# Layer 3: Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), activation="relu"))
# Layer 4: Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), activation="relu"))
# Layer 5: Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), activation="relu"))
model.add(MaxPool2D(pool_size=(3, 3), strides=(2, 2)))
# Flatten the output from the previous layer
model.add(Flatten())
# Layer 6: Fully Connected Layer
model.add(Dense(units=4096, activation="relu"))
model.add(Dropout(0.5))
# Layer 7: Fully Connected Layer
model.add(Dense(units=4096, activation="relu"))
model.add(Dropout(0.5))
# Layer 8: Output Layer
model.add(Dense(units=5, activation="softmax"))
# Compile the model (the labels were one-hot encoded with to_categorical,
# so categorical cross-entropy is the appropriate loss)
from keras.optimizers import Adam

model.compile(
    optimizer=Adam(learning_rate=0.001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# Train the model
# Note: this architecture expects 227x227x3 inputs, so the 28x28x1 arrays prepared
# earlier would first have to be re-generated at the larger size (see the note above).
model.fit(X_train, Y_train, batch_size=128, epochs=10, validation_data=(X_val, Y_val))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/280/129280768.ipynb
|
diabetic-retinopathy-resized
|
tanlikesmath
|
[{"Id": 129280768, "ScriptId": 38232837, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 972181, "CreationDate": "05/12/2023 12:15:20", "VersionNumber": 1.0, "Title": "Medical Dataset Classification through CNN & Alex", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 399.0, "LinesInsertedFromPrevious": 399.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 185182010, "KernelVersionId": 129280768, "SourceDatasetVersionId": 418031}]
|
[{"Id": 418031, "DatasetId": 131128, "DatasourceVersionId": 433304, "CreatorUserId": 674553, "LicenseName": "Unknown", "CreationDate": "05/08/2019 01:48:17", "VersionNumber": 7.0, "Title": "Diabetic Retinopathy (resized)", "Slug": "diabetic-retinopathy-resized", "Subtitle": "Resized version of the Diabetic Retinopathy Kaggle competition dataset", "Description": "# Diabetic Retinopathy Detection Competition Dataset Resized/Cropped\n\nIn this dataset, I have included both a resized version of the dataset, and a cropped then resized version of the data.\n\n## trainLabels.csv\n\nThis file contains the name of the file under the 'image' column and the label under the 'level' column.\n\n\n## resized_train:\n\nThis folder was created by simply resizing the dataset to 1024x1024 if it is bigger than this size, else it remains the same.\nThe code used to create this dataset is:\n\n\n import glob\n import os\n from tqdm import tqdm\n import math\n from PIL import Image \n files = glob.glob('D:\\\\Experiments with Deep Learning\\\\DR Kaggle\\\\train\\\\train\\\\train\\\\*.jpeg')\n\n new_width = 1024\n\n for i in tqdm(range(len(files))):\n img = Image.open(files[i])\n width,height = img.size\n ratio = height/width\n if width > new_width:\n new_image = img.resize((new_width,math.ceil(ratio*new_width))) \n else:\n new_image = img\n new_image.save('D:\\\\Experiments with Deep Learning\\\\DR \n Kaggle\\\\train\\\\train\\\\resized_train\\\\'+os.path.basename(files[i]))\n`\n\n\n## resized_train_cropped:\n\nIn this case, as much of the black space is cropped out by trying to identify the center and radius of the circle of the fundus image. Some of the images turned out to be fully black or very close to fully black, and no mask was found. Hence, those images were manually removed. There may still be some noisy images remaining, however.\n\nThe code used to create this dataset is:\n\n # import the necessary packages\n import numpy as np\n import cv2\n import glob\n import os\n from tqdm import tqdm\n import math\n from PIL import Image\n files = glob.glob('D:\\\\Experiments with Deep Learning\\\\DR Kaggle\\\\train\\\\train\\\\train\\\\*.jpeg')\n\n new_sz = 1024\n\n def crop_image(image):\n output = image.copy()\n gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n ret,gray = cv2.threshold(gray,10,255,cv2.THRESH_BINARY)\n contours,hierarchy = cv2.findContours(gray,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)\n if not contours:\n print('no contours!')\n flag = 0\n return image, flag\n cnt = max(contours, key=cv2.contourArea)\n ((x, y), r) = cv2.minEnclosingCircle(cnt)\n x = int(x); y = int(y); r = int(r)\n flag = 1\n #print(x,y,r)\n if r > 100:\n return output[0 + (y-r)*int(r", "VersionNotes": "Add back the original resized trained dataset", "TotalCompressedBytes": 1299959351.0, "TotalUncompressedBytes": 7787863813.0}]
|
[{"Id": 131128, "CreatorUserId": 674553, "OwnerUserId": 674553.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 418031.0, "CurrentDatasourceVersionId": 433304.0, "ForumId": 141262, "Type": 2, "CreationDate": "03/04/2019 05:39:14", "LastActivityDate": "03/04/2019", "TotalViews": 101543, "TotalDownloads": 16544, "TotalVotes": 458, "TotalKernels": 132}]
|
[{"Id": 674553, "UserName": "tanlikesmath", "DisplayName": "ilovescience", "RegisterDate": "07/29/2016", "PerformanceTier": 4}]
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
np.random.seed(2)
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import itertools
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
from keras.optimizers import RMSprop
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau
sns.set(style="white", context="notebook", palette="deep")
from PIL import Image
import os
# for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
data_dir = "/kaggle/input/diabetic-retinopathy-resized/resized_train/resized_train"
# print('Number of training images:', len(os.listdir(data_dir)))
cropped_data_dir = "/kaggle/input/diabetic-retinopathy-resized/resized_train_cropped/resized_train_cropped/"
# print('Number of training images:', len(os.listdir(cropped_data_dir)))
train_labels = pd.read_csv("../input/diabetic-retinopathy-resized/trainLabels.csv")
train_labels.shape
# Check the data
train_labels.isnull().any().describe()
train_labels.head()
train_labels.tail()
sample_img = Image.open(os.path.join(data_dir, os.listdir(data_dir)[0]))
print("Image size:", sample_img.size)
sample_img = Image.open(os.path.join(cropped_data_dir, os.listdir(cropped_data_dir)[0]))
print("Image size:", sample_img.size)
f, axarr = plt.subplots(2, 2)
axarr[0, 0].imshow(Image.open(os.path.join(data_dir, os.listdir(data_dir)[0])))
axarr[0, 1].imshow(
Image.open(os.path.join(cropped_data_dir, os.listdir(cropped_data_dir)[0]))
)
axarr[1, 0].imshow(Image.open(os.path.join(data_dir, os.listdir(data_dir)[1])))
axarr[1, 1].imshow(
Image.open(os.path.join(cropped_data_dir, os.listdir(cropped_data_dir)[1]))
)
# **Check Data Distribution**
widths = []
heights = []
for img_file in os.listdir(data_dir):
img = Image.open(os.path.join(data_dir, img_file))
width, height = img.size
widths.append(width)
heights.append(height)
print("Average image size:", np.mean(widths), "x", np.mean(heights))
# Check the distribution of image sizes
fig, axs = plt.subplots(1, 2, figsize=(15, 6))
axs[0].hist(widths, bins=50)
axs[0].set_xlabel("Image width")
axs[0].set_ylabel("Frequency")
axs[1].hist(heights, bins=50)
axs[1].set_xlabel("Image height")
axs[1].set_ylabel("Frequency")
plt.show()
# Check the distribution of image modes
modes = []
for img_file in os.listdir(data_dir):
img = Image.open(os.path.join(data_dir, img_file))
modes.append(img.mode)
print("Image modes:", set(modes))
# **Check for class distribution**
# labels_df = pd.read_csv('../input/diabetic-retinopathy-resized/trainLabels.csv')
train_labels["level"].hist(bins=5)
plt.xlabel("Class")
plt.ylabel("Frequency")
plt.title("Class Distribution")
plt.show()
# **Converting images into their pixel values as a 1D array in a CSV file**
import cv2
IMG_DIR = data_dir
# Read train_labels.csv to obtain the image name sequence
df_train = train_labels
image_sequence = df_train["image"].values
with open("eye_train.csv", "wb") as f:
for img_name in image_sequence:
img_path = os.path.join(IMG_DIR, img_name + ".jpeg")
# Process the image
img_array = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
# if img_array is not None:
img_pil = Image.fromarray(img_array)
img_28x28 = np.array(img_pil.resize((28, 28), Image.ANTIALIAS))
img_array = img_28x28.flatten()
# Normalize the images
img_array = img_array / 255.0
# Save the image pixel values to the CSV file
np.savetxt(f, img_array.reshape(1, -1), delimiter=",", header="")
# else:
# print(f"Failed to load image: {img_name}")
# import os
# os.remove("/#kaggle/working/eye_train.csv")
train_labels.image.unique().shape
# Note: np.savetxt above wrote no header line, so every row of eye_train.csv is pixel data;
# skiprows=1 therefore skips the first data row, which is read back and re-attached below.
df_eye_train = pd.read_csv("eye_train.csv", header=None, skiprows=1)
# read the skipped first row back as a plain data row
first_row = pd.read_csv("eye_train.csv", nrows=1, header=None).values[0]
df_eye_train.loc[-1] = first_row  # temporarily added at index -1
df_eye_train.index = df_eye_train.index + 1  # shift indices so the new row sorts first
df_eye_train = df_eye_train.sort_index().reset_index(drop=True)  # index starts from 0 again
df_eye_train.tail()
df_eye_train.shape
Y_train = train_labels["level"]
Y_train[135:145]
# ## **Performing Classification through CNN**
# **Label Encoding**
Y_train = to_categorical(Y_train, num_classes=5)
Y_train[135:145]
# **Split into training and validation set**
# Set the random seed
random_seed = 2
# Reshape images into 3 dimensions (height = 28px, width = 28px, channels = 1)
df_eye_train = df_eye_train.values.reshape(-1, 28, 28, 1)
df_eye_train.shape
Y_train.shape
# Split the train and the validation set for the fitting (80:20% split)
X_train, X_val, Y_train, Y_val = train_test_split(
df_eye_train, Y_train, test_size=0.2, random_state=random_seed
)
X_train.shape
Y_train.shape
# ## **CNN**
# Set the CNN model
# my CNN architecture is In -> [[Conv2D->relu]*2 -> MaxPool2D -> Dropout]*2 -> Flatten -> Dense -> Dropout -> Out
model = Sequential()
model.add(
Conv2D(
filters=32,
kernel_size=(5, 5),
padding="Same",
activation="relu",
input_shape=(28, 28, 1),
)
)
model.add(Conv2D(filters=32, kernel_size=(5, 5), padding="Same", activation="relu"))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="Same", activation="relu"))
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="Same", activation="relu"))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(5, activation="softmax"))
# ### **Defining Optimiser and Annealer**
# Define the optimizer
optimizer = RMSprop(learning_rate=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
# Compile the model
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
# Set a learning rate annealer
learning_rate_reduction = ReduceLROnPlateau(
    monitor="val_accuracy", patience=3, verbose=1, factor=0.5, min_lr=0.00001
)
epochs = 30
batch_size = 100
# Fit the model
history = model.fit(
    X_train,
    Y_train,
    epochs=epochs,
    batch_size=batch_size,
    validation_data=(X_val, Y_val),
    callbacks=[learning_rate_reduction],
)
# ## **Evaluating the Model**
# **Training and Validation curves**
# Retrieve the training and validation loss values
train_loss = history.history["loss"]
val_loss = history.history["val_loss"]
# Retrieve the training and validation accuracy values
train_acc = history.history["accuracy"]
val_acc = history.history["val_accuracy"]
# Plot the loss curves
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.plot(train_loss, label="Training Loss")
plt.plot(val_loss, label="Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.title("Loss Curves")
plt.legend()
# Plot the accuracy curves
plt.subplot(1, 2, 2)
plt.plot(train_acc, label="Training Accuracy")
plt.plot(val_acc, label="Validation Accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.title("Accuracy Curves")
plt.legend()
# Display the plot
plt.tight_layout()
plt.show()
# Look at confusion matrix
def plot_confusion_matrix(
cm, classes, normalize=False, title="Confusion matrix", cmap=plt.cm.Blues
):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation="nearest", cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.0
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(
j,
i,
cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black",
)
plt.tight_layout()
plt.ylabel("True label")
plt.xlabel("Predicted label")
# Predict the values from the validation dataset
Y_pred = model.predict(X_val)
# Convert predictions classes to one hot vectors
Y_pred_classes = np.argmax(Y_pred, axis=1)
# Convert validation observations to one hot vectors
Y_true = np.argmax(Y_val, axis=1)
# compute the confusion matrix
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
# plot the confusion matrix
plot_confusion_matrix(confusion_mtx, classes=range(10))
# Display some error results
# Errors are difference between predicted labels and true labels
errors = Y_pred_classes - Y_true != 0
Y_pred_classes_errors = Y_pred_classes[errors]
Y_pred_errors = Y_pred[errors]
Y_true_errors = Y_true[errors]
X_val_errors = X_val[errors]
def display_errors(errors_index, img_errors, pred_errors, obs_errors):
"""This function shows 6 images with their predicted and real labels"""
n = 0
nrows = 2
ncols = 3
fig, ax = plt.subplots(nrows, ncols, sharex=True, sharey=True)
for row in range(nrows):
for col in range(ncols):
error = errors_index[n]
ax[row, col].imshow((img_errors[error]).reshape((28, 28)))
ax[row, col].set_title(
"Predicted label :{}\nTrue label :{}".format(
pred_errors[error], obs_errors[error]
)
)
n += 1
# Probabilities of the wrong predicted numbers
Y_pred_errors_prob = np.max(Y_pred_errors, axis=1)
# Predicted probabilities of the true values in the error set
true_prob_errors = np.diagonal(np.take(Y_pred_errors, Y_true_errors, axis=1))
# Difference between the probability of the predicted label and the true label
delta_pred_true_errors = Y_pred_errors_prob - true_prob_errors
# Sorted list of the delta prob errors
sorted_dela_errors = np.argsort(delta_pred_true_errors)
# Top 6 errors
most_important_errors = sorted_dela_errors[-6:]
# Show the top 6 errors
display_errors(
most_important_errors, X_val_errors, Y_pred_classes_errors, Y_true_errors
)
# ## **Performing Classification through AlexNet**
# # AlexNet
# ## For AlexNet, input size should be 227x227x3. For this, another csv should be created and passed to the model. It is extremely time-consuming. And time is running out!!!!
# Define the AlexNet model
model = Sequential()
# Layer 1: Convolutional Layer
model.add(
Conv2D(
filters=96,
kernel_size=(11, 11),
strides=(4, 4),
activation="relu",
input_shape=(227, 227, 3),
)
)
model.add(MaxPool2D(pool_size=(3, 3), strides=(2, 2)))
# Layer 2: Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(5, 5), strides=(1, 1), activation="relu"))
model.add(MaxPool2D(pool_size=(3, 3), strides=(2, 2)))
# Layer 3: Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), activation="relu"))
# Layer 4: Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), activation="relu"))
# Layer 5: Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), activation="relu"))
model.add(MaxPool2D(pool_size=(3, 3), strides=(2, 2)))
# Flatten the output from the previous layer
model.add(Flatten())
# Layer 6: Fully Connected Layer
model.add(Dense(units=4096, activation="relu"))
model.add(Dropout(0.5))
# Layer 7: Fully Connected Layer
model.add(Dense(units=4096, activation="relu"))
model.add(Dropout(0.5))
# Layer 8: Output Layer
model.add(Dense(units=5, activation="softmax"))
# Compile the model
model.compile(
optimizer=Adam(learning_rate=0.001),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
# Train the model
model.fit(X_train, Y_train, batch_size=128, epochs=10, validation_data=(X_val, Y_val))
| false | 1 | 4,044 | 1 | 4,772 | 4,044 |
||
129314340
|
<jupyter_start><jupyter_text>IP102-Dataset
### Context
Insect pest are one of the main factors affecting agricultural product. Accurate recognition and classification of insect pests can prevent huge economic losses. This dataset will play a great role in this regard.
### Content
IP02 dataset has 75,222 images and average size of 737 samples per class. The dataset has a split of 6:1:3. There are 8 super classes. Rice, Corn, Wheat, Beet, Alfalfa belong to Field Crop(FC) and Vitis, Citrus, Mango belong to Economic Crop(EC). For details {[link](https://openaccess.thecvf.com/content_CVPR_2019/html/Wu_IP102_A_Large-Scale_Benchmark_Dataset_for_Insect_Pest_Recognition_CVPR_2019_paper.html)}
Kaggle dataset identifier: ip02-dataset
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# Import Necessary libreries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import (
Conv2D,
MaxPooling2D,
Flatten,
Dense,
Dropout,
BatchNormalization,
)
import warnings
warnings.filterwarnings("ignore")
import os
from matplotlib.image import imread
import random
import matplotlib.image as mpimg
tf.random.set_seed(5)
# Data agumentation
datagen = ImageDataGenerator(
rotation_range=10,
rescale=1.0 / 255.0,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
zoom_range=0.1,
shear_range=0.1,
brightness_range=[0.8, 1.2],
fill_mode="nearest",
validation_split=0.3, # set validation split to 20%
)
# Import the data into train,test and Validation subset
trainimagedata = datagen.flow_from_directory(
"/kaggle/input/ip02-dataset/classification/train",
batch_size=512,
class_mode="categorical",
target_size=(48, 48),
subset="training",
)
testimagedata = datagen.flow_from_directory(
"/kaggle/input/ip02-dataset/classification/test",
batch_size=256,
class_mode="categorical",
target_size=(48, 48),
subset="validation",
)
valimagedata = datagen.flow_from_directory(
"/kaggle/input/ip02-dataset/classification/val",
batch_size=256,
class_mode="categorical",
target_size=(48, 48),
subset="validation",
)
trainimagedata.classes
trainimagedata.class_indices
dir_path = "/kaggle/input/ip02-dataset/classification/train"
class_names = ["0", "1", "10", "100", "101", "11", "12", "13", "14", "15"]
num_classes = 9
fig, axs = plt.subplots(3, 3, figsize=(12, 6))
for i, class_name in enumerate(class_names[:num_classes]):
class_path = os.path.join(dir_path, class_name)
images = os.listdir(class_path)
image_path = os.path.join(class_path, images[0])
img = mpimg.imread(image_path)
ax = axs[i // 3, i % 3]
ax.imshow(img)
ax.axis("off")
ax.set_title(class_name)
plt.tight_layout()
plt.show()
# ## Inference
# 1) We can see that Class number 12 has Watermark hence it will affect the accuracy of the model
#
# 2) Means our dataset contain watermarkimages as we have print only sample images of 9 classes
input_shape = trainimagedata.image_shape
print(input_shape)
# Model Architecture
model = tf.keras.models.Sequential()
model.add(
tf.keras.layers.Conv2D(
128, (3, 3), input_shape=input_shape, activation="relu", padding="same"
)
)
model.add(tf.keras.layers.MaxPool2D(2, 2))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation="relu"))
model.add(tf.keras.layers.MaxPool2D(2, 2))
model.add(tf.keras.layers.Flatten())
model.add(Dense(256, activation="relu"))
model.add(Dropout(0.25))
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.25))
model.add(Dense(102, activation="softmax"))
# Set the Hyperparameter to Adam optimizer
from tensorflow.keras.optimizers import SGD
optimizer = SGD(lr=0.01, momentum=0.9)
# Compile the model
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
from keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor="val_loss", patience=1)
# Fitting the model
mdl_history = model.fit(
trainimagedata,
validation_data=testimagedata,
epochs=2,
batch_size=256,
callbacks=[early_stop],
)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/314/129314340.ipynb
|
ip02-dataset
|
rtlmhjbn
|
[{"Id": 129314340, "ScriptId": 38365791, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11219290, "CreationDate": "05/12/2023 17:24:38", "VersionNumber": 3.0, "Title": "IP102 Insect pest classification", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 148.0, "LinesInsertedFromPrevious": 65.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 83.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185245776, "KernelVersionId": 129314340, "SourceDatasetVersionId": 3132677}]
|
[{"Id": 3132677, "DatasetId": 1908726, "DatasourceVersionId": 3181768, "CreatorUserId": 6031032, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "02/03/2022 07:58:39", "VersionNumber": 1.0, "Title": "IP102-Dataset", "Slug": "ip02-dataset", "Subtitle": "A Large-Scale Benchmark Dataset for Insect Pest Recognition", "Description": "### Context\n\nInsect pest are one of the main factors affecting agricultural product. Accurate recognition and classification of insect pests can prevent huge economic losses. This dataset will play a great role in this regard.\n\n\n### Content\n\nIP02 dataset has 75,222 images and average size of 737 samples per class. The dataset has a split of 6:1:3. There are 8 super classes. Rice, Corn, Wheat, Beet, Alfalfa belong to Field Crop(FC) and Vitis, Citrus, Mango belong to Economic Crop(EC). For details {[link](https://openaccess.thecvf.com/content_CVPR_2019/html/Wu_IP102_A_Large-Scale_Benchmark_Dataset_for_Insect_Pest_Recognition_CVPR_2019_paper.html)}\n\n\n### Acknowledgements\n\nThis dataset was proposed and accepted by the authors={Xiaoping Wu and Chi Zhan and Yukun Lai and Ming-Ming Cheng and Jufeng Yang} in CVPR 2019\n\n\n### Inspiration\n\nWhat information can we gain by using deep learning from image to solve pest recognition and classification problem task", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1908726, "CreatorUserId": 6031032, "OwnerUserId": 6031032.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3132677.0, "CurrentDatasourceVersionId": 3181768.0, "ForumId": 1932161, "Type": 2, "CreationDate": "02/03/2022 07:58:39", "LastActivityDate": "02/03/2022", "TotalViews": 20721, "TotalDownloads": 2515, "TotalVotes": 47, "TotalKernels": 5}]
|
[{"Id": 6031032, "UserName": "rtlmhjbn", "DisplayName": "Ratul Mahjabin", "RegisterDate": "10/25/2020", "PerformanceTier": 1}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# Import Necessary libreries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import (
Conv2D,
MaxPooling2D,
Flatten,
Dense,
Dropout,
BatchNormalization,
)
import warnings
warnings.filterwarnings("ignore")
import os
from matplotlib.image import imread
import random
import matplotlib.image as mpimg
tf.random.set_seed(5)
# Data agumentation
datagen = ImageDataGenerator(
rotation_range=10,
rescale=1.0 / 255.0,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
zoom_range=0.1,
shear_range=0.1,
brightness_range=[0.8, 1.2],
fill_mode="nearest",
validation_split=0.3, # set validation split to 20%
)
# Import the data into train,test and Validation subset
trainimagedata = datagen.flow_from_directory(
"/kaggle/input/ip02-dataset/classification/train",
batch_size=512,
class_mode="categorical",
target_size=(48, 48),
subset="training",
)
testimagedata = datagen.flow_from_directory(
"/kaggle/input/ip02-dataset/classification/test",
batch_size=256,
class_mode="categorical",
target_size=(48, 48),
subset="validation",
)
valimagedata = datagen.flow_from_directory(
"/kaggle/input/ip02-dataset/classification/val",
batch_size=256,
class_mode="categorical",
target_size=(48, 48),
subset="validation",
)
trainimagedata.classes
trainimagedata.class_indices
dir_path = "/kaggle/input/ip02-dataset/classification/train"
class_names = ["0", "1", "10", "100", "101", "11", "12", "13", "14", "15"]
num_classes = 9
fig, axs = plt.subplots(3, 3, figsize=(12, 6))
for i, class_name in enumerate(class_names[:num_classes]):
class_path = os.path.join(dir_path, class_name)
images = os.listdir(class_path)
image_path = os.path.join(class_path, images[0])
img = mpimg.imread(image_path)
ax = axs[i // 3, i % 3]
ax.imshow(img)
ax.axis("off")
ax.set_title(class_name)
plt.tight_layout()
plt.show()
# ## Inference
# 1) We can see that Class number 12 has Watermark hence it will affect the accuracy of the model
#
# 2) Means our dataset contain watermarkimages as we have print only sample images of 9 classes
input_shape = trainimagedata.image_shape
print(input_shape)
# Model Architecture
model = tf.keras.models.Sequential()
model.add(
tf.keras.layers.Conv2D(
128, (3, 3), input_shape=input_shape, activation="relu", padding="same"
)
)
model.add(tf.keras.layers.MaxPool2D(2, 2))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation="relu"))
model.add(tf.keras.layers.MaxPool2D(2, 2))
model.add(tf.keras.layers.Flatten())
model.add(Dense(256, activation="relu"))
model.add(Dropout(0.25))
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.25))
model.add(Dense(102, activation="softmax"))
# Set the Hyperparameter to Adam optimizer
from tensorflow.keras.optimizers import SGD
optimizer = SGD(lr=0.01, momentum=0.9)
# Compile the model
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
from keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor="val_loss", patience=1)
# Fitting the model
mdl_history = model.fit(
trainimagedata,
validation_data=testimagedata,
epochs=2,
batch_size=256,
callbacks=[early_stop],
)
| false | 0 | 1,348 | 0 | 1,585 | 1,348 |
||
129314173
|
# Predicting Survival on the Titanic: A Machine Learning Case Study
# All the import libraries are imported for data analysis,machine learning and visualisation.
import pandas as pd
import numpy as np
import random as rnd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
train_df = pd.read_csv("/kaggle/input/titanic-dataset/train.csv")
train_df.head()
test_df = pd.read_csv("/kaggle/input/titanic-dataset/test.csv")
test_df.head()
train_df.info()
train_df.isna().sum()
train_df.describe(include="all")
test_df.head()
test_df.info()
test_df.isna().sum()
test_df.describe(include="all")
titanic = [train_df, test_df]
train_df[["Pclass", "Survived"]].groupby(["Pclass"], as_index=False).mean().sort_values(
by="Survived", ascending=False
)
train_df[["SibSp", "Survived"]].groupby(["SibSp"], as_index=False).mean().sort_values(
by="Survived", ascending=False
)
train_df[["Parch", "Survived"]].groupby(["Parch"], as_index=False).mean().sort_values(
by="Survived", ascending=False
)
train_df[["Sex", "Survived"]].groupby(["Sex"], as_index=False).mean().sort_values(
by="Survived", ascending=False
)
# Plotting
g = sns.FacetGrid(train_df, col="Survived")
g.map(plt.hist, "Age", bins=30)
grid = sns.FacetGrid(train_df, col="Survived", row="Pclass")
grid.map(plt.hist, "Age", alpha=0.5, bins=20)
grid.add_legend()
grid = sns.FacetGrid(train_df, col="Survived")
grid.map(sns.barplot, "Sex", "Fare", alpha=0.5, ci=None)
grid.add_legend()
# FEATURE ENGINEERING
for data in titanic:
data["Title"] = data.Name.str.extract(" ([A-Za-z]+)\.", expand=False)
pd.crosstab(train_df["Title"], train_df["Sex"])
for data in titanic:
data["Title"] = data["Title"].replace(
[
"Lady",
"Countess",
"Capt",
"Col",
"Don",
"Dr",
"Major",
"Rev",
"Sir",
"Jonkheer",
"Dona",
],
"Unknown",
)
data["Title"] = data["Title"].replace("Mlle", "Miss")
data["Title"] = data["Title"].replace("Ms", "Miss")
data["Title"] = data["Title"].replace("Mme", "Mrs")
train_df[["Title", "Survived"]].groupby(["Title"], as_index=False).mean()
title_new = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Unknown": 5}
for data in titanic:
data["Title"] = data["Title"].map(title_new)
data["Title"] = data["Title"].fillna(0)
train_df.head()
train_df = train_df.drop(["Name", "PassengerId"], axis=1)
test_df = test_df.drop(["Name"], axis=1)
titanic = [train_df, test_df]
for data in titanic:
data["Sex"] = data["Sex"].map({"female": 1, "male": 0}).astype(int)
train_df.isna().sum()
train_df["Age"] = train_df["Age"].fillna(train_df["Age"].median())
test_df["Age"] = test_df["Age"].fillna(test_df["Age"].median())
train_df["Age"] = train_df["Age"].astype(int)
test_df["Age"] = test_df["Age"].astype(int)
train_df.head()
for data in titanic:
data.loc[data["Age"] <= 16, "Age"] = 0
data.loc[(data["Age"] > 16) & (data["Age"] <= 32), "Age"] = 1
data.loc[(data["Age"] > 32) & (data["Age"] <= 48), "Age"] = 2
data.loc[(data["Age"] > 48) & (data["Age"] <= 64), "Age"] = 3
data.loc[data["Age"] > 64, "Age"]
train_df.head()
for data in titanic:
data["FamilySize"] = data["SibSp"] + data["Parch"] + 1
train_df[["FamilySize", "Survived"]].groupby(
["FamilySize"], as_index=False
).mean().sort_values(by="Survived", ascending=False)
for data in titanic:
data["Alone"] = 0
data.loc[data["FamilySize"] == 1, "Alone"] = 1
train_df[["Alone", "Survived"]].groupby(["Alone"], as_index=False).mean()
train_df = train_df.drop(["Parch", "SibSp", "FamilySize"], axis=1)
test_df = test_df.drop(["Parch", "SibSp", "FamilySize"], axis=1)
titanic = [train_df, test_df]
a = train_df.Embarked.dropna().mode()[0]
for data in titanic:
data["Embarked"] = data["Embarked"].fillna(a)
train_df[["Embarked", "Survived"]].groupby(
["Embarked"], as_index=False
).mean().sort_values(by="Survived", ascending=False)
for data in titanic:
data["Embarked"] = data["Embarked"].map({"S": 0, "C": 1, "Q": 2}).astype(int)
train_df.head()
test_df["Fare"].fillna(test_df["Fare"].dropna().median(), inplace=True)
test_df.head()
for data in titanic:
data.loc[data["Fare"] <= 8.00, "Fare"] = 0
data.loc[(data["Fare"] > 8.00) & (data["Fare"] <= 14.500), "Fare"] = 1
data.loc[(data["Fare"] > 14.500) & (data["Fare"] <= 31), "Fare"] = 2
data.loc[data["Fare"] > 31, "Fare"] = 3
data["Fare"] = data["Fare"].astype(int)
train_df.head()
train_df.isna().sum()
test_df.isna().sum()
# MODELING
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
# LOGISTIC REGRESSION
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
Y_pred
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
# CORRELATION BETWEEN FEATURE AND SURVIVED
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ["Feature"]
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by="Correlation", ascending=False)
# SUPPORT VECTOR MACHINES
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
Y_pred
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
# k-Nearest NEIGHBOURS
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
Y_pred
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
# GAUSSIAN NAIVE BAYES
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
Y_pred
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
# LINEAR SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
Y_pred
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# RANDOM FOREST
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
# DECISION TREE
#
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
Y_pred
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/314/129314173.ipynb
| null | null |
[{"Id": 129314173, "ScriptId": 38446457, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12136716, "CreationDate": "05/12/2023 17:22:37", "VersionNumber": 3.0, "Title": "Predicting Survival on the Titanic: A Machine Lea", "EvaluationDate": "05/12/2023", "IsChange": false, "TotalLines": 238.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 238.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# Predicting Survival on the Titanic: A Machine Learning Case Study
# All the import libraries are imported for data analysis,machine learning and visualisation.
import pandas as pd
import numpy as np
import random as rnd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
train_df = pd.read_csv("/kaggle/input/titanic-dataset/train.csv")
train_df.head()
test_df = pd.read_csv("/kaggle/input/titanic-dataset/test.csv")
test_df.head()
train_df.info()
train_df.isna().sum()
train_df.describe(include="all")
test_df.head()
test_df.info()
test_df.isna().sum()
test_df.describe(include="all")
titanic = [train_df, test_df]
train_df[["Pclass", "Survived"]].groupby(["Pclass"], as_index=False).mean().sort_values(
by="Survived", ascending=False
)
train_df[["SibSp", "Survived"]].groupby(["SibSp"], as_index=False).mean().sort_values(
by="Survived", ascending=False
)
train_df[["Parch", "Survived"]].groupby(["Parch"], as_index=False).mean().sort_values(
by="Survived", ascending=False
)
train_df[["Sex", "Survived"]].groupby(["Sex"], as_index=False).mean().sort_values(
by="Survived", ascending=False
)
# Plotting
g = sns.FacetGrid(train_df, col="Survived")
g.map(plt.hist, "Age", bins=30)
grid = sns.FacetGrid(train_df, col="Survived", row="Pclass")
grid.map(plt.hist, "Age", alpha=0.5, bins=20)
grid.add_legend()
grid = sns.FacetGrid(train_df, col="Survived")
grid.map(sns.barplot, "Sex", "Fare", alpha=0.5, ci=None)
grid.add_legend()
# FEATURE ENGINEERING
for data in titanic:
data["Title"] = data.Name.str.extract(" ([A-Za-z]+)\.", expand=False)
pd.crosstab(train_df["Title"], train_df["Sex"])
for data in titanic:
data["Title"] = data["Title"].replace(
[
"Lady",
"Countess",
"Capt",
"Col",
"Don",
"Dr",
"Major",
"Rev",
"Sir",
"Jonkheer",
"Dona",
],
"Unknown",
)
data["Title"] = data["Title"].replace("Mlle", "Miss")
data["Title"] = data["Title"].replace("Ms", "Miss")
data["Title"] = data["Title"].replace("Mme", "Mrs")
train_df[["Title", "Survived"]].groupby(["Title"], as_index=False).mean()
title_new = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Unknown": 5}
for data in titanic:
data["Title"] = data["Title"].map(title_new)
data["Title"] = data["Title"].fillna(0)
train_df.head()
train_df = train_df.drop(["Name", "PassengerId"], axis=1)
test_df = test_df.drop(["Name"], axis=1)
titanic = [train_df, test_df]
for data in titanic:
data["Sex"] = data["Sex"].map({"female": 1, "male": 0}).astype(int)
train_df.isna().sum()
train_df["Age"] = train_df["Age"].fillna(train_df["Age"].median())
test_df["Age"] = test_df["Age"].fillna(test_df["Age"].median())
train_df["Age"] = train_df["Age"].astype(int)
test_df["Age"] = test_df["Age"].astype(int)
train_df.head()
for data in titanic:
data.loc[data["Age"] <= 16, "Age"] = 0
data.loc[(data["Age"] > 16) & (data["Age"] <= 32), "Age"] = 1
data.loc[(data["Age"] > 32) & (data["Age"] <= 48), "Age"] = 2
data.loc[(data["Age"] > 48) & (data["Age"] <= 64), "Age"] = 3
data.loc[data["Age"] > 64, "Age"]
train_df.head()
for data in titanic:
data["FamilySize"] = data["SibSp"] + data["Parch"] + 1
train_df[["FamilySize", "Survived"]].groupby(
["FamilySize"], as_index=False
).mean().sort_values(by="Survived", ascending=False)
for data in titanic:
data["Alone"] = 0
data.loc[data["FamilySize"] == 1, "Alone"] = 1
train_df[["Alone", "Survived"]].groupby(["Alone"], as_index=False).mean()
train_df = train_df.drop(["Parch", "SibSp", "FamilySize"], axis=1)
test_df = test_df.drop(["Parch", "SibSp", "FamilySize"], axis=1)
titanic = [train_df, test_df]
a = train_df.Embarked.dropna().mode()[0]
for data in titanic:
data["Embarked"] = data["Embarked"].fillna(a)
train_df[["Embarked", "Survived"]].groupby(
["Embarked"], as_index=False
).mean().sort_values(by="Survived", ascending=False)
for data in titanic:
data["Embarked"] = data["Embarked"].map({"S": 0, "C": 1, "Q": 2}).astype(int)
train_df.head()
test_df["Fare"].fillna(test_df["Fare"].dropna().median(), inplace=True)
test_df.head()
for data in titanic:
data.loc[data["Fare"] <= 8.00, "Fare"] = 0
data.loc[(data["Fare"] > 8.00) & (data["Fare"] <= 14.500), "Fare"] = 1
data.loc[(data["Fare"] > 14.500) & (data["Fare"] <= 31), "Fare"] = 2
data.loc[data["Fare"] > 31, "Fare"] = 3
data["Fare"] = data["Fare"].astype(int)
train_df.head()
train_df.isna().sum()
test_df.isna().sum()
# MODELING
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
# LOGISTIC REGRESSION
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
Y_pred
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
# CORRELATION BETWEEN FEATURE AND SURVIVED
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ["Feature"]
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by="Correlation", ascending=False)
# SUPPORT VECTOR MACHINES
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
Y_pred
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
# k-Nearest NEIGHBOURS
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
Y_pred
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
# GAUSSIAN NAIVE BAYES
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
Y_pred
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
# LINEAR SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
Y_pred
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# RANDOM FOREST
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
# DECISION TREE
#
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
Y_pred
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
| false | 0 | 2,550 | 0 | 2,550 | 2,550 |
||
129363738
|
<jupyter_start><jupyter_text>SF Salaries
One way to understand how a city government works is by looking at who it employs and how its employees are compensated. This data contains the names, job title, and compensation for San Francisco city employees on an annual basis from 2011 to 2014.
[](https://www.kaggle.com/benhamner/d/kaggle/sf-salaries/exploring-the-sf-city-salary-data)
## Exploration Ideas
To help get you started, here are some data exploration ideas:
- How have salaries changed over time between different groups of people?
- How are base pay, overtime pay, and benefits allocated between different groups?
- Is there any evidence of pay discrimination based on gender in this dataset?
- How is budget allocated based on different groups and responsibilities?
Have other ideas you're curious for someone else to explore? Post them in [this forum thread](https://www.kaggle.com/forums/f/977/sf-salaries/t/18264/sf-salaries-dataset).
## Data Description
sf-salaries-release-*.zip (downloadable via the "Download Data" link in the header above) contains a CSV table and a SQLite database (with the same data as the CSV file). Here's the [code that creates this data release](https://github.com/benhamner/sf-salaries).
The original source for this data is [here](http://transparentcalifornia.com/salaries/san-francisco/). We've taken the raw files here and combined/normalized them into a single CSV file as well as a SQLite database with an equivalently-defined table.
Kaggle dataset identifier: sf-salaries
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# Import libiraries >>
import pandas as pd
import numpy as np
import sqlite3
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
# Import Data >>
database = "../input/sf-salaries/database.sqlite"
conn = sqlite3.connect(database)
data = pd.read_sql('select * from sqlite_master where type="table"', conn)
data
# Fetch Salaries table>>
Salaries = pd.read_sql("select * from Salaries", conn)
Salaries.head()
# EDA >>
Salaries["JobTitle"].nunique()
Salaries.info()
# there are rows
# contain this value 'Not Provided' in these columns
# ,so we will replace it with nan values
Salaries = Salaries.replace("Not Provided", np.nan)
Salaries.describe()
# what is the average BasePay ??
avg_basePay = pd.read_sql("select AVG(BasePay) from Salaries", conn)
avg_basePay
# what is the highest amount of the TotalPay ??
Max_Overtime = pd.read_sql("select MAX(TotalPay) from Salaries", conn)
Max_Overtime
# what is the TotalPay of ALBERT PARDINI (inclduding benefits)??
ALBERT_TotalPay = pd.read_sql(
'select TotalPayBenefits from Salaries where EmployeeName = "ALBERT PARDINI"', conn
)
ALBERT_TotalPay
# what is the name of the highest and lowest paid person??
Highest_paid = pd.read_sql(
"""select EmployeeName from Salaries where
TotalPayBenefits = (select MAX(TotalPayBenefits) from Salaries)""",
conn,
)
Highest_paid
Lowest_paid = pd.read_sql(
"""select EmployeeName from Salaries where
TotalPayBenefits = (select MIN(TotalPayBenefits) from Salaries)""",
conn,
)
Lowest_paid
Lowest_paid_info = pd.read_sql(
"""select * from Salaries where
TotalPayBenefits = (select MIN(TotalPayBenefits) from Salaries)""",
conn,
)
Lowest_paid_info
# we notic that there is employees who do not take a salary or owe to the company
# we will count their number
Owe_emp = pd.read_sql(
"select Count(Id) from Salaries where TotalPayBenefits <= 0", conn
)
Owe_emp
# what was the avarage of the TotalPay of all the employees per Year??
avg_salary_year = pd.read_sql(
"select Year,AVG(TotalPay) from Salaries GROUP BY Year", conn
)
avg_salary_year
# what are the most common jobs??
#
common_jobs = pd.read_sql(
"""select distinct(JobTitle) , Count(JobTitle) as count
from Salaries
GROUP BY JobTitle
ORDER BY count DESC
LIMIT 5""",
conn,
)
common_jobs
# How many job titles were represented by only 1 person in 2013??
one_job_title = pd.read_sql(
"""select Count(JobTitle) from (
select JobTitle , Count(JobTitle) as count
from Salaries
where Year = 2013
GROUP BY JobTitle
HAVING count = 1)""",
conn,
)
one_job_title
# How many employees have the word Chief in their job title??
Chief_emp = pd.read_sql(
'select Count(EmployeeName) from Salaries where JobTitle like "%Chief%"', conn
)
Chief_emp
# Is there is a correlation between (lenght of the job title string) and (salary)??
Salaries["titles_lenght"] = Salaries["JobTitle"].apply(len)
# apply fun is very useful when you want to perform functions and calcs on rows.
Salaries["titles_lenght"]
Salaries[["titles_lenght", "TotalPayBenefits"]].corr()
# ##### Visualizing the correlation
plt.scatter(Salaries["titles_lenght"], Salaries["TotalPayBenefits"])
plt.xlabel("Job Title Length")
plt.ylabel("Total Pay Benefits")
plt.title("Correlation Between Job Title Length and Total Pay Benefits")
plt.show()
# #### Corr is very very small ,so their is no corr
# visualizing the 5 number summary of TotalPayBenefits
plt.figure(figsize=(8, 3))
sns.boxplot(x=Salaries["TotalPayBenefits"]).set_title(
"5 number summary of TotalPayBenefits"
)
# ### Insight:
# The most of employees receive a wage with approximately 100000 per year,
# The number of employees that receive a wage with above 400000 is very small.
# so, what is the title of these people who take more than 400000??
Title = pd.read_sql(
"select JobTitle from Salaries where TotalPayBenefits > 400000", conn
)
Title
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/363/129363738.ipynb
|
sf-salaries
| null |
[{"Id": 129363738, "ScriptId": 38463785, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9435168, "CreationDate": "05/13/2023 06:39:49", "VersionNumber": 1.0, "Title": "SF Salaries-SQL-EDA-Vis", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 153.0, "LinesInsertedFromPrevious": 153.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 9}]
|
[{"Id": 185340515, "KernelVersionId": 129363738, "SourceDatasetVersionId": 827864}]
|
[{"Id": 827864, "DatasetId": 14, "DatasourceVersionId": 850525, "CreatorUserId": 2326382, "LicenseName": "CC0: Public Domain", "CreationDate": "12/05/2019 23:30:07", "VersionNumber": 5.0, "Title": "SF Salaries", "Slug": "sf-salaries", "Subtitle": "Explore San Francisco city employee salary data", "Description": "One way to understand how a city government works is by looking at who it employs and how its employees are compensated. This data contains the names, job title, and compensation for San Francisco city employees on an annual basis from 2011 to 2014.\n\n[](https://www.kaggle.com/benhamner/d/kaggle/sf-salaries/exploring-the-sf-city-salary-data)\n\n## Exploration Ideas\n\nTo help get you started, here are some data exploration ideas:\n\n - How have salaries changed over time between different groups of people?\n - How are base pay, overtime pay, and benefits allocated between different groups?\n - Is there any evidence of pay discrimination based on gender in this dataset?\n - How is budget allocated based on different groups and responsibilities?\n\nHave other ideas you're curious for someone else to explore? Post them in [this forum thread](https://www.kaggle.com/forums/f/977/sf-salaries/t/18264/sf-salaries-dataset).\n\n## Data Description\n\nsf-salaries-release-*.zip (downloadable via the \"Download Data\" link in the header above) contains a CSV table and a SQLite database (with the same data as the CSV file). Here's the [code that creates this data release](https://github.com/benhamner/sf-salaries).\n\nThe original source for this data is [here](http://transparentcalifornia.com/salaries/san-francisco/). We've taken the raw files here and combined/normalized them into a single CSV file as well as a SQLite database with an equivalently-defined table.", "VersionNotes": "Unzipped and re-uploaded files", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 14, "CreatorUserId": 993, "OwnerUserId": NaN, "OwnerOrganizationId": 4.0, "CurrentDatasetVersionId": 827864.0, "CurrentDatasourceVersionId": 850525.0, "ForumId": 977, "Type": 2, "CreationDate": "12/21/2015 19:40:00", "LastActivityDate": "02/06/2018", "TotalViews": 443682, "TotalDownloads": 69236, "TotalVotes": 805, "TotalKernels": 406}]
| null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# Import libiraries >>
import pandas as pd
import numpy as np
import sqlite3
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
# Import Data >>
database = "../input/sf-salaries/database.sqlite"
conn = sqlite3.connect(database)
data = pd.read_sql('select * from sqlite_master where type="table"', conn)
data
# Fetch Salaries table>>
Salaries = pd.read_sql("select * from Salaries", conn)
Salaries.head()
# EDA >>
Salaries["JobTitle"].nunique()
Salaries.info()
# there are rows
# contain this value 'Not Provided' in these columns
# ,so we will replace it with nan values
Salaries = Salaries.replace("Not Provided", np.nan)
Salaries.describe()
# what is the average BasePay ??
avg_basePay = pd.read_sql("select AVG(BasePay) from Salaries", conn)
avg_basePay
# what is the highest amount of the TotalPay ??
Max_Overtime = pd.read_sql("select MAX(TotalPay) from Salaries", conn)
Max_Overtime
# what is the TotalPay of ALBERT PARDINI (inclduding benefits)??
ALBERT_TotalPay = pd.read_sql(
'select TotalPayBenefits from Salaries where EmployeeName = "ALBERT PARDINI"', conn
)
ALBERT_TotalPay
# what is the name of the highest and lowest paid person??
Highest_paid = pd.read_sql(
"""select EmployeeName from Salaries where
TotalPayBenefits = (select MAX(TotalPayBenefits) from Salaries)""",
conn,
)
Highest_paid
Lowest_paid = pd.read_sql(
"""select EmployeeName from Salaries where
TotalPayBenefits = (select MIN(TotalPayBenefits) from Salaries)""",
conn,
)
Lowest_paid
Lowest_paid_info = pd.read_sql(
"""select * from Salaries where
TotalPayBenefits = (select MIN(TotalPayBenefits) from Salaries)""",
conn,
)
Lowest_paid_info
# we notic that there is employees who do not take a salary or owe to the company
# we will count their number
Owe_emp = pd.read_sql(
"select Count(Id) from Salaries where TotalPayBenefits <= 0", conn
)
Owe_emp
# what was the avarage of the TotalPay of all the employees per Year??
avg_salary_year = pd.read_sql(
"select Year,AVG(TotalPay) from Salaries GROUP BY Year", conn
)
avg_salary_year
# what are the most common jobs??
#
common_jobs = pd.read_sql(
"""select distinct(JobTitle) , Count(JobTitle) as count
from Salaries
GROUP BY JobTitle
ORDER BY count DESC
LIMIT 5""",
conn,
)
common_jobs
# How many job titles were represented by only 1 person in 2013??
one_job_title = pd.read_sql(
"""select Count(JobTitle) from (
select JobTitle , Count(JobTitle) as count
from Salaries
where Year = 2013
GROUP BY JobTitle
HAVING count = 1)""",
conn,
)
one_job_title
# How many employees have the word Chief in their job title??
Chief_emp = pd.read_sql(
'select Count(EmployeeName) from Salaries where JobTitle like "%Chief%"', conn
)
Chief_emp
# Is there is a correlation between (lenght of the job title string) and (salary)??
Salaries["titles_lenght"] = Salaries["JobTitle"].apply(len)
# apply fun is very useful when you want to perform functions and calcs on rows.
Salaries["titles_lenght"]
Salaries[["titles_lenght", "TotalPayBenefits"]].corr()
# ##### Visualizing the correlation
plt.scatter(Salaries["titles_lenght"], Salaries["TotalPayBenefits"])
plt.xlabel("Job Title Length")
plt.ylabel("Total Pay Benefits")
plt.title("Correlation Between Job Title Length and Total Pay Benefits")
plt.show()
# #### Corr is very very small ,so their is no corr
# visualizing the 5 number summary of TotalPayBenefits
plt.figure(figsize=(8, 3))
sns.boxplot(x=Salaries["TotalPayBenefits"]).set_title(
"5 number summary of TotalPayBenefits"
)
# ### Insight:
# The most of employees receive a wage with approximately 100000 per year,
# The number of employees that receive a wage with above 400000 is very small.
# so, what is the title of these people who take more than 400000??
Title = pd.read_sql(
"select JobTitle from Salaries where TotalPayBenefits > 400000", conn
)
Title
| false | 0 | 1,404 | 9 | 1,866 | 1,404 |
||
129363085
|
<jupyter_start><jupyter_text>Heart Disease Dataset
### Context
This data set dates from 1988 and consists of four databases: Cleveland, Hungary, Switzerland, and Long Beach V. It contains 76 attributes, including the predicted attribute, but all published experiments refer to using a subset of 14 of them. The "target" field refers to the presence of heart disease in the patient. It is integer valued 0 = no disease and 1 = disease.
### Content
Attribute Information:
> 1. age
> 2. sex
> 3. chest pain type (4 values)
> 4. resting blood pressure
> 5. serum cholestoral in mg/dl
> 6. fasting blood sugar > 120 mg/dl
> 7. resting electrocardiographic results (values 0,1,2)
> 8. maximum heart rate achieved
> 9. exercise induced angina
> 10. oldpeak = ST depression induced by exercise relative to rest
> 11. the slope of the peak exercise ST segment
> 12. number of major vessels (0-3) colored by flourosopy
> 13. thal: 0 = normal; 1 = fixed defect; 2 = reversable defect
The names and social security numbers of the patients were recently removed from the database, replaced with dummy values.
Kaggle dataset identifier: heart-disease-dataset
<jupyter_code>import pandas as pd
df = pd.read_csv('heart-disease-dataset/heart.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 1025 entries, 0 to 1024
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 1025 non-null int64
1 sex 1025 non-null int64
2 cp 1025 non-null int64
3 trestbps 1025 non-null int64
4 chol 1025 non-null int64
5 fbs 1025 non-null int64
6 restecg 1025 non-null int64
7 thalach 1025 non-null int64
8 exang 1025 non-null int64
9 oldpeak 1025 non-null float64
10 slope 1025 non-null int64
11 ca 1025 non-null int64
12 thal 1025 non-null int64
13 target 1025 non-null int64
dtypes: float64(1), int64(13)
memory usage: 112.2 KB
<jupyter_text>Examples:
{
"age": 52.0,
"sex": 1.0,
"cp": 0.0,
"trestbps": 125.0,
"chol": 212.0,
"fbs": 0.0,
"restecg": 1.0,
"thalach": 168.0,
"exang": 0.0,
"oldpeak": 1.0,
"slope": 2.0,
"ca": 2.0,
"thal": 3.0,
"target": 0.0
}
{
"age": 53.0,
"sex": 1.0,
"cp": 0.0,
"trestbps": 140.0,
"chol": 203.0,
"fbs": 1.0,
"restecg": 0.0,
"thalach": 155.0,
"exang": 1.0,
"oldpeak": 3.1,
"slope": 0.0,
"ca": 0.0,
"thal": 3.0,
"target": 0.0
}
{
"age": 70.0,
"sex": 1.0,
"cp": 0.0,
"trestbps": 145.0,
"chol": 174.0,
"fbs": 0.0,
"restecg": 1.0,
"thalach": 125.0,
"exang": 1.0,
"oldpeak": 2.6,
"slope": 0.0,
"ca": 0.0,
"thal": 3.0,
"target": 0.0
}
{
"age": 61.0,
"sex": 1.0,
"cp": 0.0,
"trestbps": 148.0,
"chol": 203.0,
"fbs": 0.0,
"restecg": 1.0,
"thalach": 161.0,
"exang": 0.0,
"oldpeak": 0.0,
"slope": 2.0,
"ca": 1.0,
"thal": 3.0,
"target": 0.0
}
<jupyter_script># # **About Dataset**
# **Content**
# This data set dates from 1988 and consists of four databases: Cleveland, Hungary, Switzerland, and Long Beach V. It contains 76 attributes, including the predicted attribute, but all published experiments refer to using a subset of 14 of them. The "target" field refers to the presence of heart disease in the patient. It is integer valued 0 = no disease and 1 = disease.
# **Feature Dataset**
# 1. Age = Age
# 2. Sex = Gender (male = 1, female = 0)
# 3. cp = Chest pain (4 points)
# 4. trestbps = resting blood pressure in mm Hg
# 5. chol = serum cholesterol in mg/dl
# 6. fbs = fasting blood sugar > 120 mg/dl (yes = 1, no = 0)
# 7. restecg = resting electrocardiography results (value 0,1,2)
# 8. thalach = maximum heart rate
# 9. exang = Exercise induced angina (yes = 1, no = 0)
# 10. oldpeak = ST exercise-induced depression relative to rest
# 11. slope = the slope of the peak exercise ST segment
# 12. ca = number of major vessels (0-3) colored by fluoroscopy
# 13. thal = thalassemia test result: 0 = normal; 1 = fixed defect; 2 = reversible defect
# 14. target = Indication of heart disease (yes = 1, no = 0)
# # **Define business problems, metrics, goals**
# - Business Problem: According to World Health Organization (WHO) data, 85% of deaths worldwide are caused by stroke and heart attack. It is therefore important to predict a person's risk of heart disease so that early intervention can prevent or reduce that risk. This heart disease dataset can be used for analysis aimed at predicting whether or not a person has heart disease.
# - Goal: Build an accurate classification model to predict the likelihood that a person has heart disease. The model is expected to support the prevention and treatment of heart disease and thereby reduce heart disease risk.
# - Metrics: Since the goal is a classification model that predicts whether or not a person has heart disease, the main metric for judging the model is classification accuracy. Additional metrics such as precision, recall, F1-score, and area under the curve (AUC) can also be used; they measure classifier performance more comprehensively and give extra information about performance on each predicted class (a hedged sketch of computing these metrics follows below).
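# A minimal, hedged sketch (not part of the original analysis) of how the metrics named
# above could be computed with scikit-learn. The labels and probabilities here are
# hypothetical placeholders; in practice y_test, y_pred, and y_prob would come from a
# fitted classifier evaluated on a hold-out set.
from sklearn.metrics import (
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
    roc_auc_score,
)

y_test_demo = [0, 1, 1, 0, 1]  # hypothetical true labels
y_pred_demo = [0, 1, 0, 0, 1]  # hypothetical predicted labels
y_prob_demo = [0.2, 0.9, 0.4, 0.1, 0.8]  # hypothetical predicted probabilities of class 1
print("Accuracy :", accuracy_score(y_test_demo, y_pred_demo))
print("Precision:", precision_score(y_test_demo, y_pred_demo))
print("Recall   :", recall_score(y_test_demo, y_pred_demo))
print("F1-score :", f1_score(y_test_demo, y_pred_demo))
print("AUC      :", roc_auc_score(y_test_demo, y_prob_demo))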
# # **The workflow that can be performed by a data scientist in handling the heart disease dataset case can be summarized in Crisp-DM**
# The workflow that a data scientist can follow for the heart disease dataset case can be summarized with CRISP-DM, as follows:
# 1. Understanding the Business Problem
# - Define the business problem and the business goals
# - Identify the data sources that are needed
# - Define the success criteria / metrics
# 2. Data Understanding
# - Collect and understand the available heart disease data: the number of features, the number of samples, the data types, and so on.
# - Explore the data to understand its characteristics, for example distributions, correlations between features, and so on.
# - Explore further by computing descriptive statistics, such as the mean, median, and mode, for each variable.
# 3. Data Preparation
# - Clean the data of missing values, duplicates, and irrelevant data, and handle outliers.
# - Convert data types where needed, for example converting nominal data to binary.
# - Perform feature engineering, i.e. create new features that help the analysis, such as combining several features or extracting new features from the raw data.
# - Select the features that are most relevant for the prediction model.
# 4. Modeling
# - Choose the model that best fits the heart disease classification task, such as a Decision Tree.
# - Split the data into a training set and a test set to evaluate model performance.
# - Train the model on the training data and evaluate it on the test data (a minimal sketch of this step and the next one follows this list).
# - Create visualizations to understand the distribution of each variable and the correlations between variables.
# 5. Evaluation
# - Measure model performance with several metrics, such as accuracy, precision, recall, F1-score, or area under the curve (AUC).
# - Decide whether the model is already good enough or still needs improvement.
# 6. Deployment
# - Deploy the model to production or make it available to stakeholders who need information about heart disease risk.
# 7. Monitoring
# - Monitor model performance to check whether the model still works well, and update the model when needed.
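# Hedged sketch of steps 4-5 (Modeling and Evaluation), using a simple train/test split and
# a DecisionTreeClassifier as named above. This is an illustrative skeleton, not the
# notebook's final model; it assumes the same Kaggle input file that is loaded later.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df_sketch = pd.read_csv("/kaggle/input/heart-disease-dataset/heart.csv")
X_sketch = df_sketch.drop("target", axis=1)
y_sketch = df_sketch["target"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X_sketch, y_sketch, test_size=0.2, random_state=42, stratify=y_sketch
)
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_tr, y_tr)
print("Hold-out accuracy:", accuracy_score(y_te, tree_clf.predict(X_te)))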
# # **Import Library and Dataset**
# Import Library
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import plotly.express as px
import matplotlib.pyplot as plt
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# create a dataframe
data = pd.read_csv("/kaggle/input/heart-disease-dataset/heart.csv")
# Call Data Table
data.head()
# # Feature Dataset
# 1. Age = Age
# 2. Sex = Gender (male = 1, female = 0)
# 3. cp = Chest pain (4 points)
# 4. trestbps = resting blood pressure in mm Hg
# 5. chol = serum cholesterol in mg/dl
# 6. fbs = fasting blood sugar > 120 mg/dl (yes = 1, no = 0)
# 7. restecg = resting electrocardiography results (value 0,1,2)
# 8. thalach = maximum heart rate
# 9. exang = Exercise induced angina (yes = 1, no = 0)
# 10. oldpeak = ST exercise-induced depression relative to rest
# 11. slope = the slope of the peak exercise ST segment
# 12. ca = number of major vessels (0-3) colored by fluoroscopy
# 13. thal = thalassemia test result: 0 = normal; 1 = fixed defect; 2 = reversible defect
# 14. target = Indication of heart disease (yes = 1, no = 0)
# Info Dataset
data.info()
# # **Check Data Quality: Missing Values, Duplicates, Outliers, Imbalance**
# Check Missing Value
data.isnull().sum()
# **Interpretation**:
# The data contains no missing values.
# Check Duplicated
data.duplicated().sum()
# **Interpretation**: The data appears to contain duplicated rows. However, on closer inspection, many of the categorical features are binary, which makes identical rows likely and causes them to be flagged as duplicates (a hedged inspection sketch follows).
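# Hedged sketch (an addition, not the original author's step): inspect the duplicated rows
# directly before deciding whether to keep or drop them.
print("Number of fully duplicated rows:", data.duplicated().sum())
duplicate_rows = data[data.duplicated(keep=False)]  # all rows involved in a duplication
duplicate_rows.head()
# data = data.drop_duplicates()  # optional: uncomment if exact duplicates are judged redundant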
# Check Outliers
fig, axs = plt.subplots(ncols=len(data.columns), figsize=(25, 5))
for i, col in enumerate(data.columns):
axs[i].boxplot(data[col])
axs[i].set_title(col)
plt.show()
# **Interpretation:**
# The boxplots above show outliers in several features, namely (a hedged IQR sketch follows this list):
# - Trestbps
# - Chol
# - Fbs
# - Thalach
# - Oldpeak
# - Ca
# - Thal
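# Hedged sketch of one way to quantify (and optionally cap) the outliers listed above using
# the 1.5*IQR rule. The column selection and the capping approach are assumptions, not the
# notebook's chosen treatment; `data` is left unchanged here.
outlier_cols = ["trestbps", "chol", "thalach", "oldpeak"]
for col in outlier_cols:
    q1, q3 = data[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    n_out = ((data[col] < lower) | (data[col] > upper)).sum()
    print(f"{col}: {n_out} values outside [{lower:.1f}, {upper:.1f}]")
    # capping (winsorizing) would be: data[col] = data[col].clip(lower, upper)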
# Check Imbalance
data.hist(figsize=(15, 10), bins=10)
plt.show()
# **Interpretation**:
# The histograms of each feature above show that only the target feature has a balanced distribution, while the other features look imbalanced (quantified in the sketch below).
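# Hedged addition: the balance observation above can be quantified with value_counts,
# shown here for the target and a few of the binary features.
for col in ["target", "sex", "fbs", "exang"]:
    print(col, data[col].value_counts(normalize=True).round(2).to_dict())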
# # **Check the descriptive statistics of the dataset (mean, distributions, etc)**
# Distribution
data.describe()
# Histograms of the numeric features in the Heart Disease data
data[["age", "trestbps", "chol", "thalach", "oldpeak"]].hist(figsize=(15, 10), bins=10)
# Histograms of the categorical features in the Heart Disease data
data[
[
"sex",
"cp",
"fbs",
"restecg",
"exang",
"slope",
"ca",
"thal",
"target",
]
].hist(figsize=(15, 10), bins=10)
# **Interpretation:**
# **Distribution**
# * Right Skewed (Mean > Median): There are some features with right-skewed distribution such as trestbps, chol, oldpeak.
# * Left Skewed (Mean < Median): There are some features with left-skewed distribution such as age, thalach.
# Meanwhile, for categorical data, we can see the mode (most frequent value) of the data.
# * Sex: Indicates a mode of 1, which means that there are more male patients than female patients.
# * Cp: Indicates a mode of 0, which means that the most common type of chest pain felt by patients is typical angina.
# * Fbs: Indicates a mode of 0, which means that most patients have a fasting blood sugar level of less than 120 mg/dL.
# * Restecg: Indicates a mode of 1, which means that the majority of patients have abnormal ST-T wave changes on resting electrocardiographic results.
# * Exang: Indicates a mode of 0, which means that the majority of patients did not experience exercise-induced angina.
# * Slope: Indicates a mode of 1, which means that the majority of patients have a slowly upsloping ST segment during peak exercise.
# * Ca: Indicates a mode of 0, which means that there is a small likelihood of narrowing or damage to the major blood vessels.
# * Thal: Indicates a mode of 2, which means that the most common thal value among patients is 2 (reversible defect).
# * Target: Indicates a mode of 1, which means that based on the available features, most patients have heart disease.
#
# Skewness of the numeric features
data[["age", "trestbps", "chol", "thalach", "oldpeak"]].skew()
# **Interpretation:**
# Here is an interpretation of the numerical data distribution in the Heart Disease dataset:
# * The Age feature shows a left-skewed distribution.
# * The Trestbps feature shows a right-skewed distribution.
# * The Chol feature shows a right-skewed distribution.
# * The Thalach feature shows a left-skewed distribution.
# * The Oldpeak feature shows a right-skewed distribution (a hedged log-transform sketch follows this list).
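# Hedged sketch (an assumption, not the notebook's chosen treatment): the right-skewed
# features listed above could be log-transformed to reduce skewness. The transform is
# computed on a copy, so `data` itself is left unchanged.
skewed_cols = ["trestbps", "chol", "oldpeak"]
print(np.log1p(data[skewed_cols]).skew())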
# # **Check the correlation between features**
# Check Correlation
corr_matrix = data.corr()
corr_matrix["target"].sort_values(ascending=False)
# Check Correlation with Heatmap
corr = data.corr()
plt.figure(figsize=(12, 10))
sns.heatmap(corr, annot=True, cmap="coolwarm")
plt.show()
# **Interpretation:**
# * cp has a correlation of 0.434854, which means cp has a moderate positive correlation with the target since 0.3 < correlation value < 0.7.
# * thalach has a correlation of 0.422895, which means thalach has a moderate positive correlation with the target since 0.3 < correlation value < 0.7.
# * slope has a correlation of 0.345512, which means slope has a moderate positive correlation with the target since 0.3 < correlation value < 0.7.
# * restecg has a correlation of 0.134468, which means restecg has a weak positive correlation with the target since -0.3 < correlation value < 0.3.
# * fbs has a correlation of -0.041164, which means fbs has a weak negative correlation with the target since -0.3 < correlation value < 0.3.
# * chol has a correlation of -0.099966, which means chol has a weak negative correlation with the target since -0.3 < correlation value < 0.3.
# * trestbps has a correlation of -0.138772, which means trestbps has a weak negative correlation with the target since -0.3 < correlation value < 0.3.
# * age has a correlation of -0.229324, which means age has a weak negative correlation with the target since -0.3 < correlation value < 0.3.
# * sex has a correlation of -0.279501, which means sex has a weak negative correlation with the target since -0.3 < correlation value < 0.3.
# * thal has a correlation of -0.337838, which means thal has a moderate negative correlation with the target since -0.7 < correlation value < -0.3.
# * ca has a correlation of -0.382085, which means ca has a moderate negative correlation with the target since -0.7 < correlation value < -0.3.
# * exang has a correlation of -0.438029, which means exang has a moderate negative correlation with the target since -0.7 < correlation value < -0.3.
# * oldpeak has a correlation of -0.438441, which means oldpeak has a moderate negative correlation with the target since -0.7 < correlation value < -0.3.
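# The weak/moderate/strong labels above follow the rule of thumb |correlation| < 0.3 (weak), 0.3-0.7 (moderate), > 0.7 (strong); a small sketch that applies the same rule programmatically:
def correlation_strength(r):
    if abs(r) >= 0.7:
        return "strong"
    elif abs(r) >= 0.3:
        return "moderate"
    return "weak"


target_corr = corr_matrix["target"].drop("target")
pd.DataFrame(
    {"correlation": target_corr, "strength": target_corr.apply(correlation_strength)}
).sort_values("correlation", ascending=False)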
# # Univariate Selection For categorical Variable
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
bestfeatures = SelectKBest(score_func=chi2)
x = data.drop("target", axis=1)
y = data["target"]
fit = bestfeatures.fit(x, y)
scores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(x.columns)
featureScores = pd.concat([dfcolumns, scores], axis=1)
featureScores.columns = ["Label", "Score"]
featureScores.sort_values(by="Score", ascending=False)
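# The chi-squared scores above are only inspected manually; as a sketch, SelectKBest can also keep the top-k features directly (k=10 here is an arbitrary choice for illustration):
selector = SelectKBest(score_func=chi2, k=10).fit(x, y)
print(list(x.columns[selector.get_support()]))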
# Drop the features that are only weakly correlated with the target
data.drop(["fbs", "restecg"], axis=1, inplace=True)
data.head()
# Pairplot of each variable
sns.pairplot(
data=data,
vars=[
"age",
"sex",
"cp",
"trestbps",
"chol",
"thalach",
"exang",
"oldpeak",
"slope",
"ca",
"thal",
"target",
],
)
# # **Define possible feature engineering through encoding**
# Feature Extraction
data["heart_age"] = (
data["age"]
+ ((data["trestbps"] - 120) / 10)
+ ((data["chol"] - 200) / 10)
+ (0.5 * (data["oldpeak"] - 1))
)
data
# **Interpretation:**
# **Heart Age:** This new feature can help in understanding a patient's heart age. It estimates a person's heart age from selected variables in the heart disease data: age, resting blood pressure (trestbps), cholesterol level (chol), and exercise-induced ST depression (oldpeak). If a person's heart age is older than their chronological age, it may indicate a higher risk of developing heart disease.
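# As a rough illustration of that idea (kept as a standalone check so no new column is added to the modeling data):
heart_age_gap = data["heart_age"] - data["age"]
print(
    "Patients whose estimated heart age exceeds their chronological age: %d of %d"
    % ((heart_age_gap > 0).sum(), len(data))
)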
# Split features and target
x = data.drop("target", axis=1)
x
y = data["target"]
y
# Encoder
data["sex"].value_counts()
# **Interpretation:**
# The feature "sex" is a categorical feature with two categories: 0 for male and 1 for female. In the heart disease dataset, the categorical values have been encoded using label encoding, where the categories are mapped to numerical values (in this case, 0 and 1).
# Encoder
data["exang"].value_counts()
# **Interpretation:**
# The exang feature is a categorical feature that has only two categories with category 0 (not experiencing exercise-induced angina) and 1 (experiencing exercise-induced angina). Since the data shown on the exang feature in the heart disease dataset is already numeric in one column, this is the result of label encoding.
# Encoder
data["slope"].value_counts()
# **Interpretation:**
# The slope feature is a nominal categorical feature that does not have meaningful order/levels, with categories 0 (Downsloping ST segment), 1 (Flat ST segment), 2 (Upsloping ST segment). Next, one hot encoding will be applied to the slope categories.
new_slope = pd.get_dummies(data["slope"], prefix="slope")
new_slope
# Encode
data["cp"].value_counts()
# **Interpretation:**
# The feature cp is a nominal categorical feature that does not have a meaningful order/level, with categories 0 (No chest pain (asymptomatic)), 1 (Typical angina chest pain), 2 (Atypical angina chest pain), 3 (Non-anginal chest pain). Therefore, for further analysis, one hot encoding will be performed on the cp category.
new_cp = pd.get_dummies(data["cp"], prefix="chestPain")
new_cp
data["ca"].value_counts()
# **Interpretation:**
# The feature Ca is an ordinal categorical feature, ranging from 0 to 4, each representing the number of major blood vessels visible on the examination. The data shown in the Ca feature of the heart disease dataset is numeric in a single column, indicating that it is the result of label encoding.
data["thal"].value_counts()
# **Interpretation:**
# The thal feature is an ordinal categorical feature with categories 0 (No thalassemia), 1 (Mild thalassemia), 2 (Moderate thalassemia), 3 (Severe thalassemia). Since the data shown in the thal feature in the heart disease dataset is already in numerical values in one column, this is the result of label encoding.
data["target"].value_counts()
# **Interpretation:**
# The feature target is a categorical feature with categories 0 (No heart disease) and 1 (Has heart disease). Since the data shown in the target feature in the heart disease dataset is already numerical in one column, this is the result of label encoding.
app = [data, new_slope, new_cp]
df = pd.concat(app, axis=1)
df.head()
df.columns
df.drop(["cp", "slope"], axis=1, inplace=True)
df
df.shape
# Feature Scaling
from sklearn.preprocessing import MinMaxScaler, StandardScaler
sc = StandardScaler()
x_numerik = df[["age", "trestbps", "oldpeak", "chol", "thalach", "heart_age"]]
encoder = df[
    [
        "sex",
        "exang",
        "ca",
        "thal",
        "target",
        "slope_0",
        "slope_1",
        "slope_2",
        "chestPain_0",
        "chestPain_1",
        "chestPain_2",
        "chestPain_3",
    ]
]
x_numerik
encoder
Data = sc.fit_transform(x_numerik)
X_numerik = pd.DataFrame(Data, columns=x_numerik.columns)
X_numerik
app1 = [X_numerik, encoder]
df1 = pd.concat(app1, axis=1)
df1
X = df1.drop("target", axis=1)
X
y = df1["target"]
y
# # **Automate EDA through Dataprep, Autovis, Dtale, etc.**
# Dataprep
from dataprep.eda import create_report
create_report(df).show()
# # **Model Selection**
# * Random Forest Classifier: This model is chosen because it can handle complex datasets with many features such as the heart disease dataset. Additionally, random forest also addresses the problem of overfitting by building many decision trees and combining the prediction results of each tree.
# * Gradient Boosting Classifier: This model is a powerful classification model and is highly suitable for the heart disease dataset. The Gradient Boosting Classifier can handle imbalanced data and overfitting problems. Additionally, this model can optimize performance by adding experience from previous iterations and identifying important features in the dataset.
# * K-Nearest Neighbors Classifier: This model is a distance-based classifier that labels a data point according to the classes of its nearest neighbors. It is easy to use and can handle complex decision boundaries, and it fits the heart disease dataset because the scaled features make distance comparisons between patients meaningful. However, KNN has no built-in feature importances, since the algorithm does not compute feature weights, so feature importance cannot be read directly from a KNN model (a model-agnostic permutation-importance sketch is shown later, after the feature-importance plots).
# # **Cross Validation and Bootstrapping.**
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
accuracy_score,
precision_score,
recall_score,
f1_score,
roc_auc_score,
classification_report,
)
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
df1
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# # **Random Forest Classifier Model**
# Cross Validation
# Create a Random Forest Classifier model
model_1 = RandomForestClassifier(n_estimators=100, random_state=42)
scores_1 = cross_val_score(model_1, X_train, y_train, cv=5)
print(
"Accuracy with cross-validation: %.2f with standard deviation %.2f"
% (scores_1.mean(), scores_1.std())
)
# **Interpretation:**
# Using the training data, the Random Forest Classifier reaches a mean cross-validation accuracy of 0.98 with a standard deviation of 0.02.
# Bootstrapping
# Use bootstrapping to estimate the accuracy of the model
n_bootstraps = 100
accuracies = []
for i in range(n_bootstraps):
# Sample the data with replacement
X_boot, y_boot = resample(X_train, y_train)
# Train the model on the bootstrap sample
model_1.fit(X_boot, y_boot)
    # Evaluate the model on the full training set
accuracy = model_1.score(X_train, y_train)
accuracies.append(accuracy)
# Calculate the mean and confidence interval of the accuracies
mean_accuracy_1 = np.mean(accuracies)
std_accuracy_1 = np.std(accuracies)
lower_ci = mean_accuracy_1 - 1.96 * std_accuracy_1
upper_ci = mean_accuracy_1 + 1.96 * std_accuracy_1
# Print the results
print("Mean accuracy: %.2f" % mean_accuracy_1)
print("95%% confidence interval: [%.2f, %.2f]" % (lower_ci, upper_ci))
# **Interpretation:**
# With bootstrapping on the training data, the Random Forest Classifier reaches a mean accuracy of 0.99, with a 95% confidence interval of [0.98, 1.00].
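# Because each bootstrap model above is scored on the full training set, which overlaps its own bootstrap sample, the estimate tends to be optimistic. A sketch of an out-of-bag variant, scoring only the training rows not drawn into each bootstrap sample (rf_oob is a fresh model used only for this check):
rf_oob = RandomForestClassifier(n_estimators=100, random_state=42)
oob_accuracies = []
for i in range(100):
    # Sample the training data with replacement
    X_boot, y_boot = resample(X_train, y_train, random_state=i)
    # Training rows that were never drawn into this bootstrap sample
    oob_mask = ~X_train.index.isin(X_boot.index)
    if oob_mask.sum() == 0:
        continue
    rf_oob.fit(X_boot, y_boot)
    oob_accuracies.append(rf_oob.score(X_train[oob_mask], y_train[oob_mask]))
print("Mean out-of-bag accuracy: %.2f" % np.mean(oob_accuracies))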
# Prediction
model_1.fit(X_train, y_train)
y_pred1 = model_1.predict(X_test)
# Compute the accuracy of the predictions
acc = accuracy_score(y_test, y_pred1)
precision = precision_score(y_test, y_pred1)
recall = recall_score(y_test, y_pred1)
f1 = f1_score(y_test, y_pred1)
roc_auc1 = roc_auc_score(y_test, model_1.predict_proba(X_test)[:, 1])
# Evaluation
print("Accuracy:", acc)
print("Precision:", precision)
print("Recall:", recall)
print("F1-score:", f1)
print("ROC:", roc_auc1)
from sklearn.metrics import confusion_matrix
conf_matrix = confusion_matrix(y_test, y_pred1)
print("Confusion Matrix:\n", conf_matrix)
# # **Gradient Boosting Classifier Model**
# Cross Validation
model_2 = GradientBoostingClassifier(random_state=42)
scores_2 = cross_val_score(model_2, X, y, cv=5)
print(
"Accuracy with cross-validation: %.2f with standard deviation %.2f"
% (scores_2.mean(), scores_2.std())
)
# **Interpretation:**
# With 5-fold cross-validation, the Gradient Boosting Classifier reaches a mean accuracy of 0.97 with a standard deviation of 0.01.
# Bootstrapping
# Use bootstrapping to estimate the accuracy of the model
n_bootstraps = 100
accuracies = []
for i in range(n_bootstraps):
# Sample the data with replacement
X_boot, y_boot = resample(X_train, y_train)
# Train the model on the bootstrap sample
model_2.fit(X_boot, y_boot)
    # Evaluate the model on the full training set
accuracy = model_2.score(X_train, y_train)
accuracies.append(accuracy)
# Calculate the mean and confidence interval of the accuracies
mean_accuracy_2 = np.mean(accuracies)
std_accuracy_2 = np.std(accuracies)
lower_ci = mean_accuracy_2 - 1.96 * std_accuracy_2
upper_ci = mean_accuracy_2 + 1.96 * std_accuracy_2
# Print the results
print("Mean accuracy: %.2f" % mean_accuracy_2)
print("95%% confidence interval: [%.2f, %.2f]" % (lower_ci, upper_ci))
# **Interpretation:**
# For the Gradient Boosting Classifier, cross-validation gives a mean accuracy of 0.97 with a standard deviation of 0.01, and bootstrapping gives a mean accuracy of 0.97 with a 95% confidence interval of [0.96, 0.99].
# Prediction
model_2.fit(X_train, y_train)
y_pred2 = model_2.predict(X_test)
# Compute the accuracy of the predictions
acc = accuracy_score(y_test, y_pred2)
precision = precision_score(y_test, y_pred2)
recall = recall_score(y_test, y_pred2)
f1 = f1_score(y_test, y_pred2)
roc_auc1 = roc_auc_score(y_test, model_2.predict_proba(X_test)[:, 1])
# Display the evaluation results
print("Accuracy:", acc)
print("Precision:", precision)
print("Recall:", recall)
print("F1-score:", f1)
print("ROC:", roc_auc1)
from sklearn.metrics import confusion_matrix
conf_matrix = confusion_matrix(y_test, y_pred2)
print("Confusion Matrix:\n", conf_matrix)
# # **KNN Classifier Model**
# Cross Validation
model_3 = KNeighborsClassifier(n_neighbors=3)
scores_3 = cross_val_score(model_3, X, y, cv=5)
print(
"Accuracy with cross-validation: %.2f with standard deviation %.2f"
% (scores_3.mean(), scores_3.std())
)
# **Interpretation:**
# With 5-fold cross-validation, the K-Nearest Neighbors (KNN) model reaches a mean accuracy of 0.93 with a standard deviation of 0.01.
# Bootstrapping
# Use bootstrapping to estimate the accuracy of the model
n_bootstraps = 100
accuracies = []
for i in range(n_bootstraps):
# Sample the data with replacement
X_boot, y_boot = resample(X_train, y_train)
# Train the model on the bootstrap sample
model_3.fit(X_boot, y_boot)
    # Evaluate the model on the full training set
accuracy = model_3.score(X_train, y_train)
accuracies.append(accuracy)
# Calculate the mean and confidence interval of the accuracies
mean_accuracy_3 = np.mean(accuracies)
std_accuracy_3 = np.std(accuracies)
lower_ci = mean_accuracy_3 - 1.96 * std_accuracy_3
upper_ci = mean_accuracy_3 + 1.96 * std_accuracy_3
# Print the results
print("Mean accuracy: %.2f" % mean_accuracy_3)
print("95%% confidence interval: [%.2f, %.2f]" % (lower_ci, upper_ci))
# Prediction
model_3.fit(X_train, y_train)
y_pred3 = model_3.predict(X_test)
# Compute the accuracy of the predictions
acc = accuracy_score(y_test, y_pred3)
precision = precision_score(y_test, y_pred3)
recall = recall_score(y_test, y_pred3)
f1 = f1_score(y_test, y_pred3)
roc_auc1 = roc_auc_score(y_test, model_3.predict_proba(X_test)[:, 1])
# Display the evaluation results
print("Accuracy:", acc)
print("Precision:", precision)
print("Recall:", recall)
print("F1-score:", f1)
print("ROC:", roc_auc1)
from sklearn.metrics import confusion_matrix
conf_matrix = confusion_matrix(y_test, y_pred3)
print("Confusion Matrix:\n", conf_matrix)
# **Interpretation:**
# For the KNN model, cross-validation gives a mean accuracy of 0.93 with a standard deviation of 0.01, while bootstrapping gives a mean accuracy of 0.95 with a 95% confidence interval of [0.93, 0.97].
# # **List of Model Evaluations**
import pandas as pd
acc = accuracy_score(y_test, y_pred2)
precision = precision_score(y_test, y_pred2)
recall = recall_score(y_test, y_pred2)
f1 = f1_score(y_test, y_pred2)
roc_auc1 = roc_auc_score(y_test, model_2.predict_proba(X_test)[:, 1])
# List of evaluation models
eval_list = [
    {
        "model": "Random Forest Classifier",
        "Mean Cross Validation": scores_1.mean(),
        "Std Cross Validation": scores_1.std(),
        "Mean Score Bootstrapping": mean_accuracy_1,
        "95% confidence interval": [
            mean_accuracy_1 - 1.96 * std_accuracy_1,
            mean_accuracy_1 + 1.96 * std_accuracy_1,
        ],
        "accuracy": accuracy_score(y_test, y_pred1),
        "precision": precision_score(y_test, y_pred1),
        "recall": recall_score(y_test, y_pred1),
        "f1_score": f1_score(y_test, y_pred1),
        "confusion matrix": confusion_matrix(y_test, y_pred1),
    },
    {
        "model": "Gradient Boosting Classifier",
        "Mean Cross Validation": scores_2.mean(),
        "Std Cross Validation": scores_2.std(),
        "Mean Score Bootstrapping": mean_accuracy_2,
        "95% confidence interval": [
            mean_accuracy_2 - 1.96 * std_accuracy_2,
            mean_accuracy_2 + 1.96 * std_accuracy_2,
        ],
        "accuracy": accuracy_score(y_test, y_pred2),
        "precision": precision_score(y_test, y_pred2),
        "recall": recall_score(y_test, y_pred2),
        "f1_score": f1_score(y_test, y_pred2),
        "confusion matrix": confusion_matrix(y_test, y_pred2),
    },
    {
        "model": "KNN Classifier",
        "Mean Cross Validation": scores_3.mean(),
        "Std Cross Validation": scores_3.std(),
        "Mean Score Bootstrapping": mean_accuracy_3,
        "95% confidence interval": [
            mean_accuracy_3 - 1.96 * std_accuracy_3,
            mean_accuracy_3 + 1.96 * std_accuracy_3,
        ],
        "accuracy": accuracy_score(y_test, y_pred3),
        "precision": precision_score(y_test, y_pred3),
        "recall": recall_score(y_test, y_pred3),
        "f1_score": f1_score(y_test, y_pred3),
        "confusion matrix": confusion_matrix(y_test, y_pred3),
    },
]
# Dataframe of list evaluation models
eval_df = pd.DataFrame(eval_list)
# Print
eval_df
# # **List Feature Importance of Each Model**
# Model Random Forest
importances = pd.Series(model_1.feature_importances_, index=X_train.columns)
importances.nlargest(10).plot(kind="barh")
plt.show()
# Model Gradient Boosting
importances = pd.Series(model_2.feature_importances_, index=X_train.columns)
importances.nlargest(10).plot(kind="barh")
plt.show()
# **Selected Model:**
# Based on the output above, the model with the best accuracy, precision, recall, and f1-score, which also shows no sign of overfitting in its cross-validation mean and standard deviation, and which produces the best bootstrap mean accuracy with a 95% confidence interval that contains its predicted performance, is the Random Forest Classifier. This model is therefore selected for the purpose of this study: an accurate and effective classifier for diagnosing heart disease in patients.
# Meanwhile, the Gradient Boosting and KNN models show signs of overfitting, with test-set performance that is lower than their performance on the training data.
# **Feature Importance:**
# * From the Random Forest Classifier model, there are 10 features that have the most influence in the model, these features are chestpain_0, thal, oldpeak, ca, thalach, heart_age, chol, trestbps, age, and exang.
# * From the Gradient Boosting Classifier model, there are 10 features that have the most influence in the model, these features are chestpain_0, ca, thal, oldpeak, heart_age, thalach, chol, age, trestbps, and slope_0.
# * From the feature importance information from each model, it can help us to understand the factors that are most influential in predicting the target variable in each model. Thus, we can focus our efforts on these features in improving or enhancing the model that has been created.
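# As noted in the model selection section, KNN has no feature_importances_ attribute. A sketch using scikit-learn's model-agnostic permutation importance on the fitted model_3 and the test split gives a comparable view (the ranking may differ from the tree-based plots above):
from sklearn.inspection import permutation_importance

perm = permutation_importance(model_3, X_test, y_test, n_repeats=10, random_state=42)
knn_importances = pd.Series(perm.importances_mean, index=X_test.columns)
knn_importances.nlargest(10).plot(kind="barh")
plt.title("KNN - Permutation Importance (sketch)")
plt.show()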
# # **Hyperparameter Tuning**
# # **Random Forest**
# Fit the model on train data
model1 = RandomForestClassifier(random_state=42)
model1.fit(X_train, y_train)
# Evaluate the model on the test data
y_pred_1 = model1.predict(X_test)
print(f1_score(y_test, y_pred_1))
# **Grid Search**
from sklearn.model_selection import GridSearchCV
# Define the hyperparameter grid to tune
hyperparameter_space1 = {
"n_estimators": [25, 50, 100],
"criterion": ["gini", "entropy"],
"class_weight": ["balanced", "balanced_subsample"],
"min_samples_split": [0.1, 0.5, 1.0],
}
# Perform Grid Search to find the best combination of hyperparameters
clf1 = GridSearchCV(
model1, hyperparameter_space1, scoring="f1", cv=5, n_jobs=-1, refit=True, verbose=2
)
# Run the Grid Search CV
clf1.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", clf1.best_params_)
clf1.best_params_, clf1.best_score_
# get the best hyperparameters
best_params1 = clf1.best_params_
# use the best hyperparameters to create a model
best_model1 = RandomForestClassifier(**best_params1, random_state=42)
# fit the model to the training data
best_model1.fit(X_train, y_train)
# predict the test data
y_pred1 = best_model1.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred1)
roc_auc = roc_auc_score(y_test, y_pred1)
f1 = f1_score(y_test, y_pred1)
print("Accuracy:", accuracy_score(y_test, y_pred1))
print("Best Hyperparameters:", best_params1)
print("f1-score:", f1_score(y_test, y_pred1))
print("ROC AUC Score:", roc_auc)
# **Random Search**
from scipy.stats import randint, truncnorm
from sklearn.model_selection import RandomizedSearchCV
model_1 = RandomForestClassifier(random_state=42)
hyperparameter_space01 = {
"n_estimators": [25, 50, 100],
"criterion": ["gini", "entropy"],
"class_weight": ["balanced", "balanced_subsample"],
"min_samples_split": [0.1, 0.5, 1.0],
}
# Perform Randomized Search to find the best combination of hyperparameters
random_search01 = RandomizedSearchCV(
model_1,
hyperparameter_space01,
scoring="f1",
cv=5,
n_jobs=-1,
refit=True,
verbose=2,
)
random_search01.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", random_search01.best_params_)
random_search01.best_params_, random_search01.best_score_
# get the best hyperparameters
best_params01 = random_search01.best_params_
# use the best hyperparameters to create a model
best_model01 = RandomForestClassifier(**best_params01, random_state=42)
# fit the model to the training data
best_model01.fit(X_train, y_train)
# predict the test data
y_pred01 = best_model01.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred01)
roc_auc = roc_auc_score(y_test, y_pred01)
f1 = f1_score(y_test, y_pred01)
print("Accuracy:", accuracy_score(y_test, y_pred01))
print("Best Hyperparameters:", best_params01)
print("f1-score:", f1_score(y_test, y_pred01))
print("ROC AUC Score:", roc_auc)
# **Learning Curve**
from sklearn.model_selection import learning_curve
# plot learning curve
train_sizes, train_scores, test_scores = learning_curve(
best_model01,
X_train,
y_train,
cv=5,
scoring="accuracy",
n_jobs=-1,
train_sizes=np.linspace(0.1, 1.0, 10),
shuffle=True,
random_state=42,
)
# plot the mean training and test scores
plt.plot(train_sizes, np.mean(train_scores, axis=1), label="Train")
plt.plot(train_sizes, np.mean(test_scores, axis=1), label="Test")
# plot the standard deviation of training and test scores
plt.fill_between(
train_sizes,
np.mean(train_scores, axis=1) - np.std(train_scores, axis=1),
np.mean(train_scores, axis=1) + np.std(train_scores, axis=1),
alpha=0.1,
)
plt.fill_between(
train_sizes,
np.mean(test_scores, axis=1) - np.std(test_scores, axis=1),
np.mean(test_scores, axis=1) + np.std(test_scores, axis=1),
alpha=0.1,
)
# plot details
plt.title("Random Forest Classifier - Learning Curve")
plt.xlabel("Training Size")
plt.ylabel("Accuracy Score")
plt.legend(loc="best")
plt.show()
# ***
# **Insight**
# ***
# The training curve declines while the validation curve rises slightly. The decline on the training data indicates that the model is too specific and has learned very particular patterns in the training set, so it cannot predict new data accurately. However, because the validation score still rises slightly, the model retains some ability to generalize to new data.
# The small gap between the learning curves shows that the difference between training and validation accuracy is not very significant, yet the resulting accuracy is still low. This indicates that the model still has shortcomings and needs improvement to produce more accurate predictions on new data. Possible remedies are regularizing the model, choosing a simpler model, or selecting more relevant features from the dataset; a sketch of a more constrained Random Forest follows below.
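# As one possible follow-up to the remedies mentioned above, a more constrained Random Forest can be cross-validated; the depth and leaf-size values below are illustrative assumptions rather than tuned settings:
rf_constrained = RandomForestClassifier(
    n_estimators=100,
    max_depth=4,  # limit tree depth to reduce variance
    min_samples_leaf=5,  # require more samples per leaf
    random_state=42,
)
scores_constrained = cross_val_score(rf_constrained, X_train, y_train, cv=5)
print(
    "Constrained RF cross-validation accuracy: %.2f (+/- %.2f)"
    % (scores_constrained.mean(), scores_constrained.std())
)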
# **ROC Curve**
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
fpr, tpr, thresholds = roc_curve(y_test, y_pred01)
# calculate AUC
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label="ROC Curve (area = %0.2f)" % roc_auc)
plt.plot([0, 1], [0, 1], "k--", label="Random Guess")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic (ROC) Curve")
plt.legend()
plt.show()
from sklearn.metrics import f1_score
# calculate F1-score for each threshold
f1_scores = [f1_score(y_test, (y_pred01 >= t).astype(int)) for t in thresholds]
# find optimal threshold
optimal_idx = np.argmax(f1_scores)
optimal_threshold = thresholds[optimal_idx]
print("Optimal threshold: ", optimal_threshold)
# ***
# **Insight**
# ***
# - The ROC curve shows the trade-off between the True Positive Rate (TPR) and the False Positive Rate (FPR) at different thresholds.
# - The closer the curve is to the point (0,1), the top-left corner, the better the model performs; the curve above is reasonable but not yet very close to (0,1).
# - The Area Under the Curve (AUC) measures the area under the ROC curve; the larger the AUC, the better the model's performance.
# - With AUC = 0.82, the model performs well at distinguishing the positive and negative classes.
# - The optimal threshold obtained is 1, i.e. the value that balances TPR and FPR for this model; note that the curve here is built from the hard 0/1 predictions, which limits the available thresholds (see the probability-based sketch below).
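# The curve above is computed from the hard 0/1 predictions (y_pred01), so it contains only a few points and the threshold search is limited. A sketch using the predicted probabilities of best_model01 (a Random Forest, so predict_proba is available) gives a smoother curve and more informative thresholds:
y_proba01 = best_model01.predict_proba(X_test)[:, 1]
fpr_p, tpr_p, thr_p = roc_curve(y_test, y_proba01)
print("AUC from predicted probabilities: %.2f" % auc(fpr_p, tpr_p))
# F1-score at each candidate threshold, skipping the leading sentinel threshold
f1_at_thr = [f1_score(y_test, (y_proba01 >= t).astype(int)) for t in thr_p[1:]]
print("Best probability threshold by F1: %.2f" % thr_p[1:][np.argmax(f1_at_thr)])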
# # **Gradient Boosting**
# Fit the model on train data
model2 = GradientBoostingClassifier(random_state=42)
model2.fit(X_train, y_train)
# Evaluate the model on the test data
y_pred_2 = model2.predict(X_test)
print(f1_score(y_test, y_pred_2))
# **Grid Search**
# Create the parameter grid
hyperparameter_space2 = {
"n_estimators": [100, 500],
"learning_rate": [0.01, 0.1],
"max_depth": [3, 5],
"criterion": ["squared_error", "friedman_mse"],
}
# Perform Grid Search to find the best combination of hyperparameters
clf2 = GridSearchCV(model2, param_grid=hyperparameter_space2, cv=5)
# Run the Grid Search CV
clf2.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", clf2.best_params_)
clf2.best_params_, clf2.best_score_
# get the best hyperparameters
best_params2 = clf2.best_params_
# use the best hyperparameters to create a model
best_model2 = GradientBoostingClassifier(**best_params2, random_state=42)
# fit the model to the training data
best_model2.fit(X_train, y_train)
# predict the test data
y_pred2 = best_model2.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred2)
roc_auc = roc_auc_score(y_test, y_pred2)
f1 = f1_score(y_test, y_pred2)
print("Accuracy:", accuracy_score(y_test, y_pred2))
print("Best Hyperparameters:", best_params2)
print("f1-score:", f1_score(y_test, y_pred2))
print("ROC AUC Score:", roc_auc)
# **Random Search**
model_2 = GradientBoostingClassifier(random_state=42)
hyperparameter_space02 = {
"n_estimators": [100, 500],
"learning_rate": [0.01, 0.1],
"max_depth": [3, 5],
"criterion": ["squared_error", "friedman_mse"],
}
# Perform Randomized Search to find the best combination of hyperparameters
random_search02 = RandomizedSearchCV(
model_2,
hyperparameter_space02,
scoring="f1",
cv=5,
n_jobs=-1,
refit=True,
verbose=2,
)
random_search02.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", random_search02.best_params_)
random_search02.best_params_, random_search02.best_score_
# get the best hyperparameters
best_params02 = random_search02.best_params_
# use the best hyperparameters to create a model
best_model02 = GradientBoostingClassifier(**best_params02, random_state=42)
# fit the model to the training data
best_model02.fit(X_train, y_train)
# predict the test data
y_pred02 = best_model02.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred02)
roc_auc = roc_auc_score(y_test, y_pred02)
f1 = f1_score(y_test, y_pred02)
print("Accuracy:", accuracy_score(y_test, y_pred02))
print("Best Hyperparameters:", best_params02)
print("f1-score:", f1_score(y_test, y_pred02))
print("ROC AUC Score:", roc_auc)
# predict the test data
y_pred02 = best_model02.predict(X_test)
y_pred02
# **Learning Curve**
from sklearn.model_selection import learning_curve
# Create the learning curve
train_sizes, train_scores, test_scores = learning_curve(
best_model02, X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 10), scoring="accuracy"
)
# Calculate the mean and standard deviation of the training scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
# Calculate the mean and standard deviation of the test scores
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
# Plot the learning curve
plt.plot(train_sizes, train_mean, label="Training score")
plt.plot(train_sizes, test_mean, label="Cross-validation score")
# Add the standard deviation bands
plt.fill_between(train_sizes, train_mean - train_std, train_mean + train_std, alpha=0.1)
plt.fill_between(train_sizes, test_mean - test_std, test_mean + test_std, alpha=0.1)
# Add labels and legend
plt.xlabel("Number of training samples")
plt.ylabel("Accuracy score")
plt.title("Learning Curve (Gradient Boosting Classifier)")
plt.legend(loc="best")
# Show the plot
plt.show()
# ***
# **Insight**
# ***
# The validation learning curve rises while the training learning curve stays stable at high accuracy, which indicates that the model is performing well and can generalize accurately to new data.
# The rise in the validation curve shows that the model learns the general patterns in the dataset and predicts previously unseen data accurately, while the stable training curve shows that the model does not overfit the training data and still predicts already-seen data accurately.
# **ROC Curve**
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
fpr, tpr, thresholds = roc_curve(y_test, y_pred02)
# calculate AUC
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label="ROC Curve (area = %0.2f)" % roc_auc)
plt.plot([0, 1], [0, 1], "k--", label="Random Guess")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic (ROC) Curve")
plt.legend()
plt.show()
from sklearn.metrics import f1_score
# calculate F1-score for each threshold
f1_scores = [f1_score(y_test, (y_pred02 >= t).astype(int)) for t in thresholds]
# find optimal threshold
optimal_idx = np.argmax(f1_scores)
optimal_threshold = thresholds[optimal_idx]
print("Optimal threshold: ", optimal_threshold)
# ***
# **Insight**
# ***
# - The ROC curve shows the trade-off between the True Positive Rate (TPR) and the False Positive Rate (FPR) at different thresholds.
# - The closer the curve is to the point (0,1), the top-left corner, the better the model performs, as the curve above shows.
# - The Area Under the Curve (AUC) measures the area under the ROC curve; the larger the AUC, the better the model's performance.
# - With AUC = 0.99, the model performs well at distinguishing the positive and negative classes.
# - The optimal threshold obtained is 1, i.e. the value that balances TPR and FPR for this model.
# # **KNN Classifier**
# **Grid Search**
# Fit the model on train data
model3 = KNeighborsClassifier()
model3.fit(X_train, y_train)
# Evaluate the model on the test data
y_pred_3 = model3.predict(X_test)
print(f1_score(y_test, y_pred_3))
# Create the parameter grid
hyperparameter_space3 = {
"n_neighbors": [3, 5, 7, 9, 11],
"weights": ["uniform", "distance"],
"algorithm": ["ball_tree", "kd_tree", "brute"],
}
# Perform Grid Search to find the best combination of hyperparameters
clf3 = GridSearchCV(model3, param_grid=hyperparameter_space3, cv=5)
# Run the Grid Search CV
clf3.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", clf3.best_params_)
clf3.best_params_, clf3.best_score_
# get the best hyperparameters
best_params3 = clf3.best_params_
# use the best hyperparameters to create a model
best_model3 = KNeighborsClassifier(**best_params3)
# fit the model to the training data
best_model3.fit(X_train, y_train)
# predict the test data
y_pred3 = best_model3.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred3)
roc_auc = roc_auc_score(y_test, y_pred3)
f1 = f1_score(y_test, y_pred3)
print("Accuracy:", accuracy_score(y_test, y_pred3))
print("Best Hyperparameters:", best_params3)
print("f1-score:", f1_score(y_test, y_pred3))
print("ROC AUC Score:", roc_auc)
# **Random Search**
model_3 = KNeighborsClassifier()
hyperparameter_space03 = {
"n_neighbors": randint(1, 50), # jumlah tetangga terdekat
"weights": ["uniform", "distance"], # bobot jarak tetangga terdekat
"algorithm": [
"ball_tree",
"kd_tree",
"brute",
], # algoritma pencarian tetangga terdekat
"leaf_size": randint(1, 100), # ukuran daun untuk algoritma ball_tree atau kd_tree
}
# Perform Randomized Search to find the best combination of hyperparameters
random_search03 = RandomizedSearchCV(
model_3,
hyperparameter_space03,
scoring="f1",
cv=5,
random_state=42,
n_jobs=-1,
refit=True,
verbose=2,
)
random_search03.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", random_search03.best_params_)
random_search03.best_params_, random_search03.best_score_
# get the best hyperparameters
best_params03 = random_search03.best_params_
# use the best hyperparameters to create a model
best_model03 = KNeighborsClassifier(**best_params03)
# fit the model to the training data
best_model03.fit(X_train, y_train)
# predict the test data
y_pred03 = best_model03.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred03)
roc_auc = roc_auc_score(y_test, y_pred03)
f1 = f1_score(y_test, y_pred03)
print("Accuracy:", accuracy_score(y_test, y_pred03))
print("Best Hyperparameters:", best_params03)
print("f1-score:", f1_score(y_test, y_pred03))
print("ROC AUC Score:", roc_auc_score(y_test, y_pred03))
# **Learning Curve**
from sklearn.model_selection import learning_curve
# Create the learning curve
train_sizes, train_scores, test_scores = learning_curve(
best_model03, X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 10), scoring="accuracy"
)
# Calculate the mean and standard deviation of the training scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
# Calculate the mean and standard deviation of the test scores
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
# Plot the learning curve
plt.plot(train_sizes, train_mean, label="Training score")
plt.plot(train_sizes, test_mean, label="Cross-validation score")
# Add the standard deviation bands
plt.fill_between(train_sizes, train_mean - train_std, train_mean + train_std, alpha=0.1)
plt.fill_between(train_sizes, test_mean - test_std, test_mean + test_std, alpha=0.1)
# Add labels and legend
plt.xlabel("Number of training samples")
plt.ylabel("Accuracy score")
plt.title("Learning Curve (KNN Classifier)")
plt.legend(loc="best")
# Show the plot
plt.show()
# ***
# **Insight**
# ***
# The validation learning curve rises while the training learning curve stays stable at high accuracy, which indicates that the model is performing well and can generalize accurately to new data.
# The rise in the validation curve shows that the model learns the general patterns in the dataset and predicts previously unseen data accurately, while the stable training curve shows that the model does not overfit the training data and still predicts already-seen data accurately.
# **ROC Curve**
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
fpr, tpr, thresholds = roc_curve(y_test, y_pred03)
# calculate AUC
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label="ROC Curve (area = %0.2f)" % roc_auc)
plt.plot([0, 1], [0, 1], "k--", label="Random Guess")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic (ROC) Curve")
plt.legend()
plt.show()
from sklearn.metrics import f1_score
# calculate F1-score for each threshold
f1_scores = [f1_score(y_test, (y_pred03 >= t).astype(int)) for t in thresholds]
# find optimal threshold
optimal_idx = np.argmax(f1_scores)
optimal_threshold = thresholds[optimal_idx]
print("Optimal threshold: ", optimal_threshold)
# ***
# **Insight**
# ***
# - The ROC curve shows the trade-off between the True Positive Rate (TPR) and the False Positive Rate (FPR) at different thresholds.
# - The closer the curve is to the point (0,1), the top-left corner, the better the model performs, as the curve above shows.
# - The Area Under the Curve (AUC) measures the area under the ROC curve; the larger the AUC, the better the model's performance.
# - With AUC = 0.99, the model performs well at distinguishing the positive and negative classes.
# - The optimal threshold obtained is 1, i.e. the value that balances TPR and FPR for this model.
# # **Model Selection Interpretation**
# Build a list of model evaluation results for the models obtained through hyperparameter tuning
eval_list = [
{
"model": "Random Forest Classifier",
" F1-Score_train": random_search01.best_score_,
"Accuracy_Test": accuracy_score(y_test, y_pred01),
"Best Hyperparameters": best_params01,
"F1-Score_Test": f1_score(y_test, y_pred01),
"ROC AUC Score": roc_auc_score(y_test, y_pred01),
},
{
"model": "Gradient Boosting Classifier",
" F1-Score_train": random_search02.best_score_,
"Accuracy_Test": accuracy_score(y_test, y_pred02),
"Best Hyperparameters": best_params02,
"F1-Score_Test": f1_score(y_test, y_pred02),
"ROC AUC Score": roc_auc_score(y_test, y_pred02),
},
{
"model": "KNN Classifier",
" F1-Score_train": random_search03.best_score_,
"Accuracy_Test": accuracy_score(y_test, y_pred03),
"Best Hyperparameters": best_params03,
"F1-Score_Test": f1_score(y_test, y_pred03),
"ROC AUC Score": roc_auc_score(y_test, y_pred03),
},
]
# Build a dataframe from the list of model evaluations
eval_df = pd.DataFrame(eval_list)
# Display the model evaluation dataframe
eval_df
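# Once one of the tuned models above is chosen, it can be persisted for the deployment step of the workflow; a minimal sketch using joblib, where the file name is an arbitrary placeholder and best_model01 stands in for whichever model is finally selected:
import joblib

joblib.dump(best_model01, "heart_disease_model.joblib")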
# # **About Dataset**
# **Content**
# This data set dates from 1988 and consists of four databases: Cleveland, Hungary, Switzerland, and Long Beach V. It contains 76 attributes, including the predicted attribute, but all published experiments refer to using a subset of 14 of them. The "target" field refers to the presence of heart disease in the patient. It is integer valued 0 = no disease and 1 = disease.
# **Feature Dataset**
# 1. Age = Age
# 2. Sex = Gender (male = 1, female = 0)
# 3. cp = Chest pain (4 points)
# 4. tresbps = resting blood pressure in mm Hg
# 5. chol = serum cholesterol in mg/dl
# 6. fbs = fasting blood sugar > 120 mg/dl (yes = 1, no = 0)
# 7. restecg = resting electrocardiography results (value 0,1,2)
# 8. thalach = maximum heart rate
# 9. exang = Exercise induced angina (yes = 1, no = 0)
# 10. oldpeak = ST exercise-induced depression relative to rest
# 11. slope = peak training ST segment slope
# 12. ca = Blood vessels that are colored after being stained by flourosopy (0-3)
# 13. thal = type of blood vessel damage 0 = normal; 1 = permanent disability; 2 = temporary disability
# 14. target = Indication of heart disease (yes = 1, no = 0)
# # **Define business problems, metrics, goals**
# - Business Problem : Berdasarkan data Organisasi Kesehatan Dunia (WHO), 85% kematian di dunia disebabkan oleh stroke dan serangan jantung. Oleh karena itu, penting untuk memprediksi risiko seseorang terkena penyakit jantung agar dapat melakukan intervensi dini untuk mencegah atau mengurangi risiko terkena penyakit jantung. Sehingga, dengan adanya data heart disease ini akan dapat digunakan untuk analisis terkait dengan memprediksi apakah seseorang terkena penyakit jantung atau tidak.
# - Tujuan: Membuat model klasifikasi yang akurat untuk memprediksi kemungkinan seseorang terkena penyakit jantung. Model ini diharapkan dapat membantu dalam pencegahan dan pengobatan penyakit jantung, sehingga dapat mengurangi risiko terkena penyakit jatung.
# - Metrik: Berdasarkan tujuan nya yaitu membuat model klasifikasi untuk memprediksi apakah seseorang terkena penyakit jantung atau tidak maka metrics yang dapat digunakan dalam melihat tingkat keberhasilan model adalah dengan akurasi model klasifikasi. Selain itu, ada beberapa metrik tambahan yang dapat digunakan dalam analisis data Heart Disease dengan atribut, seperti precision, recall, F1-score, dan area under curve (AUC). Metrik-metrik ini dapat digunakan untuk mengukur performa model klasifikasi secara lebih komprehensif dan memberikan informasi tambahan tentang performa model pada setiap kelas yang diprediksi.
# # **The workflow that can be performed by a data scientist in handling the heart disease dataset case can be summarized in Crisp-DM**
# Alur kerja yang dapat dilakukan sebagai seorang data scientist dalam menangani case dataset heart disease dapat dirangkum dalam Crisp-DM, yaitu sebagai berikut:
# 1. Understanding the Business Problem
# - Menentukan business problem dan tujuan bisnis
# - Mengidentifikasi sumber data yang diperlukan
# - Menentukan kriteria keberhasilan/ Metrics
# 2. Data Understanding
# - Mengumpulkan dan memahami data heart disease yang tersedia, seperti jumlah fitur, jumlah sampel, tipe data, dan sebagainya.
# - Melakukan eksplorasi data untuk memahami karakteristik datanya, seperti melihat distribusi, korelasi antar fitur, dan sebagainya.
# - Melakukan eksplorasi data lebih lanjut dengan menghitung statistik deskriptif, seperti mean, median, dan modus, untuk setiap variabel.
# 3. Data Preparation
# - Membersihkan data dari missing value, duplikat, data yang tidak relevan, dan menangani data outlier.
# - Mengubah tipe data jika diperlukan, seperti mengubah data nominal ke binary.
# - Melakukan feature engineering, yaitu membuat fitur baru yang dapat membantu dalam analisis data, seperti menggabungkan beberapa fitur atau mengekstrak fitur baru dari data mentah.
# - Memilih fitur yang paling relevan untuk digunakan dalam model prediksi.
# 4. Modeling
# - Memilih model yang paling sesuai untuk kasus klasifikasi data heart disease, seperti Decision Tree.
# - Membagi data menjadi data pelatihan dan data pengujian untuk mengevaluasi performa model.
# - Melatih model menggunakan data pelatihan dan mengevaluasi performa model menggunakan data pengujian.
# - Membuat visualisasi untuk memahami distribusi dari masing-masing variabel dan melihat korelasi antara variabel.
# 5. Evaluation
# - Mengukur performa model menggunakan beberapa metrik, seperti akurasi, presisi, recall, F1-score, atau area under curve (AUC).
# - Menentukan apakah model sudah cukup baik atau masih perlu dilakukan peningkatan.
# 6. Deployment
# - Mengimplementasikan model dalam produksi atau digunakan oleh stakeholder yang membutuhkan informasi mengenai risiko terkena penyakit jantung.
# 7. Monitoring
# - Melakukan monitoring terhadap performa model untuk mengetahui apakah model masih berfungsi dengan baik atau tidak, dan melakukan update model jika diperlukan.
# # **Import Library and Dataset**
# Import Library
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import plotly.express as px
import matplotlib.pyplot as plt
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# create a dataframe
data = pd.read_csv("/kaggle/input/heart-disease-dataset/heart.csv")
# Call Data Table
data.head()
# #Feature Dataset
# 1. Age = Age
# 2. Sex = Gender (male = 1, female = 0)
# 3. cp = Chest pain (4 points)
# 4. tresbps = resting blood pressure in mm Hg
# 5. chol = serum cholesterol in mg/dl
# 6. fbs = fasting blood sugar > 120 mg/dl (yes = 1, no = 0)
# 7. restecg = resting electrocardiography results (value 0,1,2)
# 8. thalach = maximum heart rate
# 9. exang = Exercise induced angina (yes = 1, no = 0)
# 10. oldpeak = ST exercise-induced depression relative to rest
# 11. slope = peak training ST segment slope
# 12. ca = Blood vessels that are colored after being stained by flourosopy (0-3)
# 13. thal = type of blood vessel damage 0 = normal; 1 = permanent disability; 2 = temporary disability
# 14. target = Indication of heart disease (yes = 1, no = 0)
# Info Dataset
data.info()
# # **Check Quality of The Missing Value, Duplicated, Outlier, Imbalance**
# Check Missing Value
data.isnull().sum()
# **Interpretasi** :
# Data menunjukkan tidak adanya missing value.
# Check Duplicated
data.duplicated().sum()
# **Interpretasi** : Data menunjukkan adanya duplicate. Namun setelah di lihat kembali pada data, banyak data kategorik yang bernilai binary sehingga mengakibatkan kondisi data terdeteksi terduplicate
# Check Outliers
fig, axs = plt.subplots(ncols=len(data.columns), figsize=(25, 5))
for i, col in enumerate(data.columns):
axs[i].boxplot(data[col])
axs[i].set_title(col)
plt.show()
# **Interpretasi :**
# Pada Boxplot diatas menunjukkan adanya outliers pada beberapa feature yaitu :
# - Trestbps
# - Chol
# - Fbs
# - Thalach
# - Oldpeak
# - Ca
# - Thal
# Check Imbalance
data.hist(figsize=(15, 10), bins=10)
plt.show()
# **Interpretasi** :
# Histrogram tiap feature diatas menunjukkan bahwa hanya feature target yang memiliki kondisi balance, sedangkan feature lainnya terlihat imbalance.
# # **Check the descriptive statistics of the dataset (mean, distributions, etc)**
# Distribution
data.describe()
# Melihat histogram dari feature yang bernilai numerik pada data Heart Disease
data[["age", "trestbps", "chol", "thalach", "oldpeak"]].hist(figsize=(15, 10), bins=10)
# Melihat histogram dari feature yang masuk dalam data kategorik pada data Heart Disease
data[
[
"sex",
"cp",
"fbs",
"restecg",
"exang",
"slope",
"ca",
"thal",
"target",
]
].hist(figsize=(15, 10), bins=10)
# **Interpretation:**
# **Distribution**
# * Right Skewed (Mean > Median): There are some features with right-skewed distribution such as trestbps, chol, oldpeak.
# * Left Skewed (Mean < Median): There are some features with left-skewed distribution such as age, thalach.
# Meanwhile, for categorical data, we can see the mode (most frequent value) of the data.
# * Sex: Indicates a mode of 1, which means that there are more male patients than female patients.
# * Cp: Indicates a mode of 0, which means that the most common type of chest pain felt by patients is typical angina.
# * Fbs: Indicates a mode of 0, which means that most patients have a fasting blood sugar level of less than 120 mg/dL.
# * Restecg: Indicates a mode of 1, which means that the majority of patients have abnormal ST-T wave changes on resting electrocardiographic results.
# * Exang: Indicates a mode of 0, which means that the majority of patients did not experience exercise-induced angina.
# * Slope: Indicates a mode of 1, which means that the majority of patients have a slowly upsloping ST segment during peak exercise.
# * Ca: Indicates a mode of 0, which means that there is a small likelihood of narrowing or damage to the major blood vessels.
# * Thal: Indicates a mode of 2, which means that most patients have moderate thalassemia category.
# * Target: Indicates a mode of 1, which means that based on the available features, most patients have heart disease.
#
# Melihat Kemiringan Feature Feature Numerik
data[["age", "trestbps", "chol", "thalach", "oldpeak"]].skew()
# **Interpretation:**
# Here is an interpretation of the numerical data distribution in the Heart Disease dataset:
# * The Age feature shows a left-skewed distribution.
# * The Trestbps feature shows a right-skewed distribution.
# * The Chol feature shows a right-skewed distribution.
# * The Thalach feature shows a left-skewed distribution.
# * The Oldpeak feature shows a right-skewed distribution.
# # **Check the correlation between features**
# Check Correlation
corr_matrix = data.corr()
corr_matrix["target"].sort_values(ascending=False)
# Check Correlation with Heatmap
corr = data.corr()
plt.figure(figsize=(12, 10))
sns.heatmap(corr, annot=True, cmap="coolwarm")
plt.show()
# **Interpretation:**
# * cp has a correlation of 0.434854, which means cp has a moderate positive correlation with the target since 0.3 < correlation value < 0.7.
# * thalach has a correlation of 0.422895, which means thalach has a moderate positive correlation with the target since 0.3 < correlation value < 0.7.
# * slope has a correlation of 0.345512, which means slope has a moderate positive correlation with the target since 0.3 < correlation value < 0.7.
# * restecg has a correlation of 0.134468, which means restecg has a weak positive correlation with the target since -0.3 < correlation value < 0.3.
# * fbs has a correlation of -0.041164, which means fbs has a weak negative correlation with the target since -0.3 < correlation value < 0.3.
# * chol has a correlation of -0.099966, which means chol has a weak negative correlation with the target since -0.3 < correlation value < 0.3.
# * trestbps has a correlation of -0.138772, which means trestbps has a weak negative correlation with the target since -0.3 < correlation value < 0.3.
# * age has a correlation of -0.229324, which means age has a weak negative correlation with the target since -0.3 < correlation value < 0.3.
# * sex has a correlation of -0.279501, which means sex has a weak negative correlation with the target since -0.3 < correlation value < 0.3.
# * thal has a correlation of -0.337838, which means thal has a strong negative correlation with the target since the correlation value is less than -0.7.
# * ca has a correlation of -0.382085, which means ca has a strong negative correlation with the target since the correlation value is less than -0.7.
# * exang has a correlation of -0.438029, which means exang has a strong negative correlation with the target since the correlation value is less than -0.7.
# * oldpeak has a correlation of -0.438441, which means oldpeak has a strong negative correlation with the target since the correlation value is less than -0.7.
# # Univariate Selection For categorical Variable
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
bestfeatures = SelectKBest(score_func=chi2)
x = data.drop("target", axis=1)
y = data["target"]
fit = bestfeatures.fit(x, y)
scores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(x.columns)
featureScores = pd.concat([dfcolumns, scores], axis=1)
featureScores.columns = ["Label", "Score"]
featureScores.sort_values(by="Score", ascending=False)
# Droping the features which are not correlated
data.drop(["fbs", "restecg"], axis=1, inplace=True)
data.head()
# Pairplot of each variable
sns.pairplot(
data=data,
vars=[
"age",
"sex",
"cp",
"trestbps",
"chol",
"thalach",
"exang",
"oldpeak",
"slope",
"ca",
"thal",
"target",
],
)
# # **Define possible feature engineering through encoding**
# Feature Extraction
data["heart_age"] = (
data["age"]
+ ((data["trestbps"] - 120) / 10)
+ ((data["chol"] - 200) / 10)
+ (0.5 * (data["oldpeak"] - 1))
)
data
# **Interpretasi :**
# **Heart Age:** This new feature can help in understanding a patient's heart age. The feature calculates a person's heart age based on certain variables in the heart disease data, such as age, blood pressure (trestbps), cholesterol level (chol), and physical activity (oldpeak). If a person's heart age is older than their biological age, it may indicate that they are at a higher risk of developing heart disease.
# Feature Scalling
x = data.drop("target", axis=1)
x
y = data["target"]
y
# Encoder
data["sex"].value_counts()
# **Interpretation:**
# The feature "sex" is a categorical feature with two categories: 0 for male and 1 for female. In the heart disease dataset, the categorical values have been encoded using label encoding, where the categories are mapped to numerical values (in this case, 0 and 1).
# Encoder
data["exang"].value_counts()
# **Interpretation:**
# The exang feature is a categorical feature that has only two categories with category 0 (not experiencing exercise-induced angina) and 1 (experiencing exercise-induced angina). Since the data shown on the exang feature in the heart disease dataset is already numeric in one column, this is the result of label encoding.
# Encoder
data["slope"].value_counts()
# **Interpretation:**
# The slope feature is a nominal categorical feature that does not have meaningful order/levels, with categories 0 (Downsloping ST segment), 1 (Flat ST segment), 2 (Upsloping ST segment). Next, one hot encoding will be applied to the slope categories.
new_slope = pd.get_dummies(data["slope"], prefix="slope")
new_slope
# Encode
data["cp"].value_counts()
# **Interpretation:**
# The feature cp is a nominal categorical feature that does not have a meaningful order/level, with categories 0 (No chest pain (asymptomatic)), 1 (Typical angina chest pain), 2 (Atypical angina chest pain), 3 (Non-anginal chest pain). Therefore, for further analysis, one hot encoding will be performed on the cp category.
new_cp = pd.get_dummies(data["cp"], prefix="chestPain")
new_cp
data["ca"].value_counts()
# **Interpretation:**
# The feature Ca is an ordinal categorical feature, ranging from 0 to 4, each representing the number of major blood vessels visible on the examination. The data shown in the Ca feature of the heart disease dataset is numeric in a single column, indicating that it is the result of label encoding.
data["thal"].value_counts()
# **Interpretation:**
# The thal feature is an ordinal categorical feature with categories 0 (No thalassemia), 1 (Mild thalassemia), 2 (Moderate thalassemia), 3 (Severe thalassemia). Since the data shown in the thal feature in the heart disease dataset is already in numerical values in one column, this is the result of label encoding.
data["target"].value_counts()
# **Interpretation:**
# The feature target is a categorical feature with categories 0 (No heart disease) and 1 (Has heart disease). Since the data shown in the target feature in the heart disease dataset is already numerical in one column, this is the result of label encoding.
app = [data, new_slope, new_cp]
df = pd.concat(app, axis=1)
df.head()
df.columns
df.drop(["cp", "slope"], axis=1, inplace=True)
df
df.shape
# Feature Scalling
from sklearn.preprocessing import MinMaxScaler, StandardScaler
sc = StandardScaler()
x_numerik = df[{"age", "trestbps", "oldpeak", "chol", "thalach", "heart_age"}]
encoder = df[
{
"sex",
"exang",
"ca",
"thal",
"target",
"slope_0",
"slope_1",
"slope_2",
"chestPain_0",
"chestPain_1",
"chestPain_2",
"chestPain_3",
}
]
x_numerik
encoder
Data = sc.fit_transform(x_numerik)
X_numerik = pd.DataFrame(Data, columns=x_numerik.columns)
X_numerik
app1 = [X_numerik, encoder]
df1 = pd.concat(app1, axis=1)
df1
X = df1.drop("target", axis=1)
X
y = df1["target"]
y
# # **Automate EDA through Dataprep, Autovis, Dtale, etc.**
# Dataprep
from dataprep.eda import create_report
create_report(df).show()
# # **Model Selection**
# * Random Forest Classifier: This model is chosen because it can handle complex datasets with many features such as the heart disease dataset. Additionally, random forest also addresses the problem of overfitting by building many decision trees and combining the prediction results of each tree.
# * Gradient Boosting Classifier: This model is a powerful classification model and is highly suitable for the heart disease dataset. The Gradient Boosting Classifier can handle imbalanced data and overfitting problems. Additionally, this model can optimize performance by adding experience from previous iterations and identifying important features in the dataset.
# * K-Nearest Neighbors Classifier: a distance-based model that classifies a data point according to its nearest neighbors. It is easy to use and can handle complex data, which makes it a reasonable candidate for the heart disease dataset. However, KNN does not compute feature weights, so it has no built-in feature importance; a permutation-based alternative is sketched below.
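# Since KNN exposes no feature_importances_ attribute, permutation importance is one model-agnostic alternative. A minimal, self-contained sketch on synthetic data (the heart-disease train/test split is only built later in this notebook):
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neighbors import KNeighborsClassifier

X_demo, y_demo = make_classification(n_samples=300, n_features=6, random_state=0)
knn_demo = KNeighborsClassifier(n_neighbors=5).fit(X_demo, y_demo)
perm = permutation_importance(knn_demo, X_demo, y_demo, n_repeats=10, random_state=0)
print(perm.importances_mean)  # mean drop in score when each feature is shuffled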
# # **Cross Validation and Bootstrapping.**
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
accuracy_score,
precision_score,
recall_score,
f1_score,
roc_auc_score,
classification_report,
)
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
df1
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# # **Random Forest Classifier Model**
# Cross Validation
# Create a Random Forest Classifier model
model_1 = RandomForestClassifier(n_estimators=100, random_state=42)
scores_1 = cross_val_score(model_1, X_train, y_train, cv=5)
print(
"Accuracy with cross-validation: %.2f with standard deviation %.2f"
% (scores_1.mean(), scores_1.std())
)
# **Interpretation:**
# The mean cross-validation accuracy of the Random Forest Classifier on the training data is 0.98, with a standard deviation of 0.02.
# Bootstrapping
# Use bootstrapping to estimate the accuracy of the model
n_bootstraps = 100
accuracies = []
for i in range(n_bootstraps):
# Sample the data with replacement
X_boot, y_boot = resample(X_train, y_train)
# Train the model on the bootstrap sample
model_1.fit(X_boot, y_boot)
# Evaluate the model on the entire dataset
accuracy = model_1.score(X_train, y_train)
accuracies.append(accuracy)
# Calculate the mean and confidence interval of the accuracies
mean_accuracy_1 = np.mean(accuracies)
std_accuracy_1 = np.std(accuracies)
lower_ci = mean_accuracy_1 - 1.96 * std_accuracy_1
upper_ci = mean_accuracy_1 + 1.96 * std_accuracy_1
# Print the results
print("Mean accuracy: %.2f" % mean_accuracy_1)
print("95%% confidence interval: [%.2f, %.2f]" % (lower_ci, upper_ci))
# **Interpretation:**
# Using bootstrapping on the training data, the Random Forest Classifier reaches a mean accuracy of 0.99, with a 95% confidence interval of [0.98, 1.00].
# Prediction
model_1.fit(X_train, y_train)
y_pred1 = model_1.predict(X_test)
# Compute the accuracy of the predictions
acc = accuracy_score(y_test, y_pred1)
precision = precision_score(y_test, y_pred1)
recall = recall_score(y_test, y_pred1)
f1 = f1_score(y_test, y_pred1)
roc_auc1 = roc_auc_score(y_test, model_1.predict_proba(X_test)[:, 1])
# Evaluation
print("Accuracy:", acc)
print("Precision:", precision)
print("Recall:", recall)
print("F1-score:", f1)
print("ROC:", roc_auc1)
from sklearn.metrics import confusion_matrix
conf_matrix = confusion_matrix(y_test, y_pred1)
print("Confusion Matrix:\n", conf_matrix)
# # **Gradient Boosting Classifier Model**
# Cross Validation
model_2 = GradientBoostingClassifier(random_state=42)
scores_2 = cross_val_score(model_2, X, y, cv=5)
print(
"Accuracy with cross-validation: %.2f with standard deviation %.2f"
% (scores_2.mean(), scores_2.std())
)
# **Interpretation:**
# Using 5-fold cross-validation, the Gradient Boosting Classifier reaches a mean accuracy of 0.97 with a standard deviation of 0.01.
# Bootstrapping
# Use bootstrapping to estimate the accuracy of the model
n_bootstraps = 100
accuracies = []
for i in range(n_bootstraps):
# Sample the data with replacement
X_boot, y_boot = resample(X_train, y_train)
# Train the model on the bootstrap sample
model_2.fit(X_boot, y_boot)
# Evaluate the model on the entire dataset
accuracy = model_2.score(X_train, y_train)
accuracies.append(accuracy)
# Calculate the mean and confidence interval of the accuracies
mean_accuracy_2 = np.mean(accuracies)
std_accuracy_2 = np.std(accuracies)
lower_ci = mean_accuracy_2 - 1.96 * std_accuracy_2
upper_ci = mean_accuracy_2 + 1.96 * std_accuracy_2
# Print the results
print("Mean accuracy: %.2f" % mean_accuracy_2)
print("95%% confidence interval: [%.2f, %.2f]" % (lower_ci, upper_ci))
# **Interpretation:**
# For the Gradient Boosting Classifier, cross-validation gives a mean accuracy of 0.97 with a standard deviation of 0.01, and bootstrapping on the training data gives a mean accuracy of 0.97 with a 95% confidence interval of [0.96, 0.99].
# Prediction
model_2.fit(X_train, y_train)
y_pred2 = model_2.predict(X_test)
# Compute the accuracy of the predictions
acc = accuracy_score(y_test, y_pred2)
precision = precision_score(y_test, y_pred2)
recall = recall_score(y_test, y_pred2)
f1 = f1_score(y_test, y_pred2)
roc_auc1 = roc_auc_score(y_test, model_2.predict_proba(X_test)[:, 1])
# Display the evaluation results
print("Accuracy:", acc)
print("Precision:", precision)
print("Recall:", recall)
print("F1-score:", f1)
print("ROC:", roc_auc1)
from sklearn.metrics import confusion_matrix
conf_matrix = confusion_matrix(y_test, y_pred2)
print("Confusion Matrix:\n", conf_matrix)
# # **KNN Classifier Model**
# Cross Validation
model_3 = KNeighborsClassifier(n_neighbors=3)
scores_3 = cross_val_score(model_3, X, y, cv=5)
print(
"Accuracy with cross-validation: %.2f with standard deviation %.2f"
% (scores_3.mean(), scores_3.std())
)
# **Interpretation:**
# Using 5-fold cross-validation, the K-Nearest Neighbors (KNN) model reaches a mean accuracy of 0.93 with a standard deviation of 0.01.
# Bootstrapping
# Use bootstrapping to estimate the accuracy of the model
n_bootstraps = 100
accuracies = []
for i in range(n_bootstraps):
# Sample the data with replacement
X_boot, y_boot = resample(X_train, y_train)
# Train the model on the bootstrap sample
model_3.fit(X_boot, y_boot)
# Evaluate the model on the entire dataset
accuracy = model_3.score(X_train, y_train)
accuracies.append(accuracy)
# Calculate the mean and confidence interval of the accuracies
mean_accuracy_3 = np.mean(accuracies)
std_accuracy_3 = np.std(accuracies)
lower_ci = mean_accuracy_3 - 1.96 * std_accuracy_3
upper_ci = mean_accuracy_3 + 1.96 * std_accuracy_3
# Print the results
print("Mean accuracy: %.2f" % mean_accuracy_3)
print("95%% confidence interval: [%.2f, %.2f]" % (lower_ci, upper_ci))
# Prediction
model_3.fit(X_train, y_train)
y_pred3 = model_3.predict(X_test)
# Compute the accuracy of the predictions
acc = accuracy_score(y_test, y_pred3)
precision = precision_score(y_test, y_pred3)
recall = recall_score(y_test, y_pred3)
f1 = f1_score(y_test, y_pred3)
roc_auc1 = roc_auc_score(y_test, model_3.predict_proba(X_test)[:, 1])
# Display the evaluation results
print("Accuracy:", acc)
print("Precision:", precision)
print("Recall:", recall)
print("F1-score:", f1)
print("ROC:", roc_auc1)
from sklearn.metrics import confusion_matrix
conf_matrix = confusion_matrix(y_test, y_pred3)
print("Confusion Matrix:\n", conf_matrix)
# **Interpretation:**
# For the KNN model, cross-validation gives a mean accuracy of 0.93 with a standard deviation of 0.01, while bootstrapping on the training data gives a mean accuracy of 0.95 with a 95% confidence interval of [0.93, 0.97].
# # **List The Model Evaluation**
import pandas as pd
acc = accuracy_score(y_test, y_pred2)
precision = precision_score(y_test, y_pred2)
recall = recall_score(y_test, y_pred2)
f1 = f1_score(y_test, y_pred2)
roc_auc1 = roc_auc_score(y_test, model_2.predict_proba(X_test)[:, 1])
# List of evaluation models
eval_list = [
{
"model": "Random Forest Classifier",
" Mean Cross Validation": scores_1.mean(),
"Std Cross Validation": scores_1.std(),
"Mean Score Bootstrapping": mean_accuracy_1,
"95% confidence interval": [
mean_accuracy_1 - 1.96 * std_accuracy_1,
mean_accuracy_1 + 1.96 * std_accuracy_1,
],
"accuracy": accuracy_score(y_test, y_pred1),
"precision": precision_score(y_test, y_pred1),
"recall": recall_score(y_test, y_pred1),
"f1_score": f1_score(y_test, y_pred1),
"confusion metrics": confusion_matrix(y_test, y_pred1),
},
{
"model": "Random Gradient Boosting",
" Mean Cross Validation": scores_2.mean(),
"Std Cross Validation": scores_2.std(),
"Mean Score Bootstrapping": mean_accuracy_2,
"95% confidence interval": [
mean_accuracy_2 - 1.96 * std_accuracy_2,
mean_accuracy_2 + 1.96 * std_accuracy_2,
],
"accuracy": accuracy_score(y_test, y_pred2),
"precision": precision_score(y_test, y_pred2),
"recall": recall_score(y_test, y_pred2),
"f1_score": f1_score(y_test, y_pred2),
"confusion metrics": confusion_matrix(y_test, y_pred2),
},
{
"model": "KNN Classfier",
" Mean Cross Validation": scores_3.mean(),
"Std Cross Validation": scores_3.std(),
"Mean Score Bootstrapping": mean_accuracy_3,
"95% confidence interval": [
mean_accuracy_3 - 1.96 * std_accuracy_3,
mean_accuracy_3 + 1.96 * std_accuracy_3,
],
"accuracy": accuracy_score(y_test, y_pred3),
"precision": precision_score(y_test, y_pred3),
"recall": recall_score(y_test, y_pred3),
"f1_score": f1_score(y_test, y_pred3),
"confusion metrics": confusion_matrix(y_test, y_pred3),
},
]
# Dataframe of list evaluation models
eval_df = pd.DataFrame(eval_list)
# Print
eval_df
# # **Feature Importance for Each Model**
# Model Random Forest
importances = pd.Series(model_1.feature_importances_, index=X_train.columns)
importances.nlargest(10).plot(kind="barh")
plt.show()
# Model Gradient Boosting
importances = pd.Series(model_2.feature_importances_, index=X_train.columns)
importances.nlargest(10).plot(kind="barh")
plt.show()
# **Selected Model:**
# Based on the results above, the Random Forest Classifier achieves the best accuracy, precision, recall, and F1-score, shows no sign of overfitting (its cross-validation mean and standard deviation are consistent, and its bootstrapped mean accuracy lies within the 95% confidence interval), and is therefore selected for the goal of this study: accurate and effective classification to support the diagnosis of heart disease in patients. A sketch of persisting this model for later use follows below.
# The Gradient Boosting and KNN models, by contrast, show signs of overfitting: their performance on the test data is worse than their performance on the training data.
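# A minimal sketch of persisting the selected model for later use, assuming joblib is available in the environment; the file name is illustrative.
import joblib

joblib.dump(model_1, "rf_heart_disease.joblib")  # save the fitted Random Forest
loaded_rf = joblib.load("rf_heart_disease.joblib")  # reload it later
print(loaded_rf.predict(X_test[:5]))  # reuse the model on new patient records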
# **Feature Importance:**
# * For the Random Forest Classifier, the ten most influential features are chestPain_0, thal, oldpeak, ca, thalach, heart_age, chol, trestbps, age, and exang.
# * For the Gradient Boosting Classifier, the ten most influential features are chestPain_0, ca, thal, oldpeak, heart_age, thalach, chol, age, trestbps, and slope_0.
# * The feature importance of each model helps us understand which factors most influence the prediction of the target variable, so we can focus on these features when improving the model (see the sketch below).
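# A minimal sketch of focusing on the most influential features, assuming model_1 and the train/test split defined above; keeping exactly the top 10 features is an illustrative choice.
top_features = pd.Series(model_1.feature_importances_, index=X_train.columns).nlargest(10).index
rf_top = RandomForestClassifier(n_estimators=100, random_state=42)
rf_top.fit(X_train[top_features], y_train)
print("Accuracy on top-10 features:", rf_top.score(X_test[top_features], y_test))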
# # **Hyperparameter Tuning**
# # **Random Forest**
# Fit the model on train data
model1 = RandomForestClassifier(random_state=42)
model1.fit(X_train, y_train)
# Evaluate the model on the test data
y_pred_1 = model1.predict(X_test)
print(f1_score(y_test, y_pred_1))
# **Grid Search**
from sklearn.model_selection import GridSearchCV
# Define the parameter grid to tune
hyperparameter_space1 = {
"n_estimators": [25, 50, 100],
"criterion": ["gini", "entropy"],
"class_weight": ["balanced", "balanced_subsample"],
"min_samples_split": [0.1, 0.5, 1.0],
}
# Perform Grid Search to find the best combination of hyperparameters
clf1 = GridSearchCV(
model1, hyperparameter_space1, scoring="f1", cv=5, n_jobs=-1, refit=True, verbose=2
)
# Run the Grid Search CV
clf1.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", clf1.best_params_)
clf1.best_params_, clf1.best_score_
# get the best hyperparameters
best_params1 = clf1.best_params_
# use the best hyperparameters to create a model
best_model1 = RandomForestClassifier(**best_params1, random_state=42)
# fit the model to the training data
best_model1.fit(X_train, y_train)
# predict the test data
y_pred1 = best_model1.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred1)
roc_auc = roc_auc_score(y_test, y_pred1)
f1 = f1_score(y_test, y_pred1)
print("Accuracy:", accuracy_score(y_test, y_pred1))
print("Best Hyperparameters:", best_params1)
print("f1-score:", f1_score(y_test, y_pred1))
print("ROC AUC Score:", roc_auc)
# **Random Search**
from scipy.stats import randint, truncnorm
from sklearn.model_selection import RandomizedSearchCV
model_1 = RandomForestClassifier(random_state=42)
hyperparameter_space01 = {
"n_estimators": [25, 50, 100],
"criterion": ["gini", "entropy"],
"class_weight": ["balanced", "balanced_subsample"],
"min_samples_split": [0.1, 0.5, 1.0],
}
# Perform Randomized Search to find the best combination of hyperparameters
random_search01 = RandomizedSearchCV(
model_1,
hyperparameter_space01,
scoring="f1",
cv=5,
n_jobs=-1,
refit=True,
verbose=2,
)
random_search01.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", random_search01.best_params_)
# Refit the Randomized Search CV (the search was already fitted above)
random_search01.fit(X_train, y_train)
random_search01.best_params_, random_search01.best_score_
# get the best hyperparameters
best_params01 = random_search01.best_params_
# use the best hyperparameters to create a model
best_model01 = RandomForestClassifier(**best_params01, random_state=42)
# fit the model to the training data
best_model01.fit(X_train, y_train)
# predict the test data
y_pred01 = best_model01.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred01)
roc_auc = roc_auc_score(y_test, y_pred01)
f1 = f1_score(y_test, y_pred01)
print("Accuracy:", accuracy_score(y_test, y_pred01))
print("Best Hyperparameters:", best_params01)
print("f1-score:", f1_score(y_test, y_pred01))
print("ROC AUC Score:", roc_auc)
# **Learning Curve**
from sklearn.model_selection import learning_curve
# plot learning curve
train_sizes, train_scores, test_scores = learning_curve(
best_model01,
X_train,
y_train,
cv=5,
scoring="accuracy",
n_jobs=-1,
train_sizes=np.linspace(0.1, 1.0, 10),
shuffle=True,
random_state=42,
)
# plot the mean training and test scores
plt.plot(train_sizes, np.mean(train_scores, axis=1), label="Train")
plt.plot(train_sizes, np.mean(test_scores, axis=1), label="Test")
# plot the standard deviation of training and test scores
plt.fill_between(
train_sizes,
np.mean(train_scores, axis=1) - np.std(train_scores, axis=1),
np.mean(train_scores, axis=1) + np.std(train_scores, axis=1),
alpha=0.1,
)
plt.fill_between(
train_sizes,
np.mean(test_scores, axis=1) - np.std(test_scores, axis=1),
np.mean(test_scores, axis=1) + np.std(test_scores, axis=1),
alpha=0.1,
)
# plot details
plt.title("Random Forest Classifier - Learning Curve")
plt.xlabel("Training Size")
plt.ylabel("Accuracy Score")
plt.legend(loc="best")
plt.show()
# ***
# **Insight**
# ***
# The training curve declines slightly while the validation curve rises slightly. The drop in the training score suggests the model has learned patterns that are very specific to the training data and may not predict new data accurately; the slight rise in the validation score, however, shows that the model still retains some ability to generalize to new data.
# The small gap between the two learning curves shows that the difference between training and validation accuracy is not large, but the resulting accuracy is still modest. The model therefore still has shortcomings and needs improvement to predict new data more accurately, for example by regularizing the model, choosing a simpler model, or selecting more relevant features; a sketch of one such option follows below.
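# A minimal sketch of one remedy mentioned above: constraining tree depth and leaf size as a form of regularization. The parameter values are illustrative assumptions; X_train and y_train come from the split above.
rf_regularized = RandomForestClassifier(
    n_estimators=100,
    max_depth=4,  # shallower trees generalize more conservatively
    min_samples_leaf=10,  # larger leaves smooth the decision boundary
    random_state=42,
)
reg_scores = cross_val_score(rf_regularized, X_train, y_train, cv=5)
print("Regularized RF CV accuracy: %.2f with standard deviation %.2f" % (reg_scores.mean(), reg_scores.std()))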
# **ROC Curve**
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
fpr, tpr, thresholds = roc_curve(y_test, y_pred01)
# calculate AUC
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label="ROC Curve (area = %0.2f)" % roc_auc)
plt.plot([0, 1], [0, 1], "k--", label="Random Guess")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic (ROC) Curve")
plt.legend()
plt.show()
from sklearn.metrics import f1_score
# calculate F1-score for each threshold
f1_scores = [f1_score(y_test, (y_pred01 >= t).astype(int)) for t in thresholds]
# find optimal threshold
optimal_idx = np.argmax(f1_scores)
optimal_threshold = thresholds[optimal_idx]
print("Optimal threshold: ", optimal_threshold)
# ***
# **Insight**
# ***
# - The ROC curve shows the trade-off between the True Positive Rate (TPR) and the False Positive Rate (FPR) at different thresholds.
# - The closer the curve is to the point (0, 1) in the upper-left corner, the better the model performs; the curve above approaches that corner but is not yet very close to (0, 1).
# - The Area Under the Curve (AUC) measures the area under the ROC curve; the larger the AUC, the better the model's performance.
# - With an AUC of 0.82, the model separates the positive and negative classes reasonably well.
# - The optimal threshold obtained is 1, i.e. the value that balances TPR and FPR for this model; a sketch of the same analysis on predicted probabilities follows below.
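# The threshold sweep above is applied to hard 0/1 predictions, so only trivial cut-offs can appear. A sketch of the same analysis on predicted probabilities (assuming best_model01 and the test split from above), which usually gives a more informative threshold:
proba01 = best_model01.predict_proba(X_test)[:, 1]
fpr_p, tpr_p, thr_p = roc_curve(y_test, proba01)
f1_p = [f1_score(y_test, (proba01 >= t).astype(int)) for t in thr_p]
print("AUC on probabilities: %.2f" % auc(fpr_p, tpr_p))
print("Best F1 threshold on probabilities: %.2f" % thr_p[np.argmax(f1_p)])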
# # **Gradient Boosting**
# Fit the model on train data
model2 = GradientBoostingClassifier(random_state=42)
model2.fit(X_train, y_train)
# Evaluate the model on the test data
y_pred_2 = model2.predict(X_test)
print(f1_score(y_test, y_pred_2))
# **Grid Search**
# Create the parameter grid
hyperparameter_space2 = {
"n_estimators": [100, 500],
"learning_rate": [0.01, 0.1],
"max_depth": [3, 5],
"criterion": ["squared_error", "friedman_mse"],
}
# Perform Grid Search to find the best combination of hyperparameters
clf2 = GridSearchCV(model2, param_grid=hyperparameter_space2, cv=5)
# Run the Grid Search CV
clf2.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", clf2.best_params_)
clf2.best_params_, clf2.best_score_
# get the best hyperparameters
best_params2 = clf2.best_params_
# use the best hyperparameters to create a model
best_model2 = GradientBoostingClassifier(**best_params2, random_state=42)
# fit the model to the training data
best_model2.fit(X_train, y_train)
# predict the test data
y_pred2 = best_model2.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred2)
roc_auc = roc_auc_score(y_test, y_pred2)
f1 = f1_score(y_test, y_pred2)
print("Accuracy:", accuracy_score(y_test, y_pred2))
print("Best Hyperparameters:", best_params2)
print("f1-score:", f1_score(y_test, y_pred2))
print("ROC AUC Score:", roc_auc)
# **Random Search**
model_2 = GradientBoostingClassifier(random_state=42)
hyperparameter_space02 = {
"n_estimators": [100, 500],
"learning_rate": [0.01, 0.1],
"max_depth": [3, 5],
"criterion": ["squared_error", "friedman_mse"],
}
# Perform Randomized Search to find the best combination of hyperparameters
random_search02 = RandomizedSearchCV(
model_2,
hyperparameter_space02,
scoring="f1",
cv=5,
n_jobs=-1,
refit=True,
verbose=2,
)
random_search02.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", random_search02.best_params_)
# Refit the Randomized Search CV (the search was already fitted above)
random_search02.fit(X_train, y_train)
random_search02.best_params_, random_search02.best_score_
# get the best hyperparameters
best_params02 = random_search02.best_params_
# use the best hyperparameters to create a model
best_model02 = GradientBoostingClassifier(**best_params02, random_state=42)
# fit the model to the training data
best_model02.fit(X_train, y_train)
# predict the test data
y_pred02 = best_model02.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred02)
roc_auc = roc_auc_score(y_test, y_pred02)
f1 = f1_score(y_test, y_pred02)
print("Accuracy:", accuracy_score(y_test, y_pred02))
print("Best Hyperparameters:", best_params02)
print("f1-score:", f1_score(y_test, y_pred02))
print("ROC AUC Score:", roc_auc)
# predict the test data
y_pred02 = best_model02.predict(X_test)
y_pred02
# **Learning Curve**
from sklearn.model_selection import learning_curve
# Create the learning curve
train_sizes, train_scores, test_scores = learning_curve(
best_model02, X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 10), scoring="accuracy"
)
# Calculate the mean and standard deviation of the training scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
# Calculate the mean and standard deviation of the test scores
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
# Plot the learning curve
plt.plot(train_sizes, train_mean, label="Training score")
plt.plot(train_sizes, test_mean, label="Cross-validation score")
# Add the standard deviation bands
plt.fill_between(train_sizes, train_mean - train_std, train_mean + train_std, alpha=0.1)
plt.fill_between(train_sizes, test_mean - test_std, test_mean + test_std, alpha=0.1)
# Add labels and legend
plt.xlabel("Number of training samples")
plt.ylabel("Accuracy score")
plt.title("Learning Curve (Gradient Boosting Classifier)")
plt.legend(loc="best")
# Show the plot
plt.show()
# ***
# **Insight**
# ***
# The validation learning curve rises while the training learning curve stays stable at high accuracy, indicating that the model is reasonably good and can generalize accurately to new data.
# The rise in the validation curve shows that the model learns the general patterns in the dataset and predicts previously unseen data accurately, while the stable training curve shows that the model does not severely overfit the training data and still predicts familiar data accurately.
# **ROC Curve**
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
fpr, tpr, thresholds = roc_curve(y_test, y_pred02)
# calculate AUC
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label="ROC Curve (area = %0.2f)" % roc_auc)
plt.plot([0, 1], [0, 1], "k--", label="Random Guess")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic (ROC) Curve")
plt.legend()
plt.show()
from sklearn.metrics import f1_score
# calculate F1-score for each threshold
f1_scores = [f1_score(y_test, (y_pred02 >= t).astype(int)) for t in thresholds]
# find optimal threshold
optimal_idx = np.argmax(f1_scores)
optimal_threshold = thresholds[optimal_idx]
print("Optimal threshold: ", optimal_threshold)
# ***
# **Insight**
# ***
# - The ROC curve shows the trade-off between the True Positive Rate (TPR) and the False Positive Rate (FPR) at different thresholds.
# - The closer the curve is to the point (0, 1) in the upper-left corner, the better the model performs, as the ROC curve above shows.
# - The Area Under the Curve (AUC) measures the area under the ROC curve; the larger the AUC, the better the model's performance.
# - With an AUC of 0.99, the model separates the positive and negative classes very well.
# - The optimal threshold obtained is 1, i.e. the value that balances TPR and FPR for this model.
# # **KNN Classifier**
# **Grid Search**
# Fit the model on train data
model3 = KNeighborsClassifier()
model3.fit(X_train, y_train)
# Evaluate the model on the test data
y_pred_3 = model3.predict(X_test)
print(f1_score(y_test, y_pred_3))
# Create the parameter grid
hyperparameter_space3 = {
"n_neighbors": [3, 5, 7, 9, 11],
"weights": ["uniform", "distance"],
"algorithm": ["ball_tree", "kd_tree", "brute"],
}
# Perform Grid Search to find the best combination of hyperparameters
clf3 = GridSearchCV(model3, param_grid=hyperparameter_space3, cv=5)
# Run the Grid Search CV
clf3.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", clf3.best_params_)
clf3.best_params_, clf3.best_score_
# get the best hyperparameters
best_params3 = clf3.best_params_
# use the best hyperparameters to create a model
best_model3 = KNeighborsClassifier(**best_params3)
# fit the model to the training data
best_model3.fit(X_train, y_train)
# predict the test data
y_pred3 = best_model3.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred3)
roc_auc = roc_auc_score(y_test, y_pred3)
f1 = f1_score(y_test, y_pred3)
print("Accuracy:", accuracy_score(y_test, y_pred3))
print("Best Hyperparameters:", best_params3)
print("f1-score:", f1_score(y_test, y_pred3))
print("ROC AUC Score:", roc_auc)
# **Random Search**
model_3 = KNeighborsClassifier()
hyperparameter_space03 = {
"n_neighbors": randint(1, 50), # jumlah tetangga terdekat
"weights": ["uniform", "distance"], # bobot jarak tetangga terdekat
"algorithm": [
"ball_tree",
"kd_tree",
"brute",
], # algoritma pencarian tetangga terdekat
"leaf_size": randint(1, 100), # ukuran daun untuk algoritma ball_tree atau kd_tree
}
# Perform Randomized Search to find the best combination of hyperparameters
random_search03 = RandomizedSearchCV(
model_3,
hyperparameter_space03,
scoring="f1",
cv=5,
random_state=42,
n_jobs=-1,
refit=True,
verbose=2,
)
random_search03.fit(X_train, y_train)
# Display the best hyperparameters
print("Best Hyperparameters:", random_search03.best_params_)
# Refit the Randomized Search CV (the search was already fitted above)
random_search03.fit(X_train, y_train)
random_search03.best_params_, random_search03.best_score_
# get the best hyperparameters
best_params03 = random_search03.best_params_
# use the best hyperparameters to create a model
best_model03 = KNeighborsClassifier(**best_params03)
# fit the model to the training data
best_model03.fit(X_train, y_train)
# predict the test data
y_pred03 = best_model03.predict(X_test)
# evaluate the model
accuracy = accuracy_score(y_test, y_pred03)
roc_auc = roc_auc_score(y_test, y_pred03)
f1 = f1_score(y_test, y_pred03)
print("Accuracy:", accuracy_score(y_test, y_pred03))
print("Best Hyperparameters:", best_params03)
print("f1-score:", f1_score(y_test, y_pred03))
print("ROC AUC Score:", roc_auc_score(y_test, y_pred03))
# **Learning Curve**
from sklearn.model_selection import learning_curve
# Create the learning curve
train_sizes, train_scores, test_scores = learning_curve(
best_model03, X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 10), scoring="accuracy"
)
# Calculate the mean and standard deviation of the training scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
# Calculate the mean and standard deviation of the test scores
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
# Plot the learning curve
plt.plot(train_sizes, train_mean, label="Training score")
plt.plot(train_sizes, test_mean, label="Cross-validation score")
# Add the standard deviation bands
plt.fill_between(train_sizes, train_mean - train_std, train_mean + train_std, alpha=0.1)
plt.fill_between(train_sizes, test_mean - test_std, test_mean + test_std, alpha=0.1)
# Add labels and legend
plt.xlabel("Number of training samples")
plt.ylabel("Accuracy score")
plt.title("Learning Curve (KNN Classifier)")
plt.legend(loc="best")
# Show the plot
plt.show()
# ***
# **Insight**
# ***
# The validation learning curve rises while the training learning curve stays stable at high accuracy, indicating that the model is reasonably good and can generalize accurately to new data.
# The rise in the validation curve shows that the model learns the general patterns in the dataset and predicts previously unseen data accurately, while the stable training curve shows that the model does not severely overfit the training data and still predicts familiar data accurately.
# **ROC Curve**
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
fpr, tpr, thresholds = roc_curve(y_test, y_pred03)
# calculate AUC
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label="ROC Curve (area = %0.2f)" % roc_auc)
plt.plot([0, 1], [0, 1], "k--", label="Random Guess")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic (ROC) Curve")
plt.legend()
plt.show()
from sklearn.metrics import f1_score
# calculate F1-score for each threshold
f1_scores = [f1_score(y_test, (y_pred03 >= t).astype(int)) for t in thresholds]
# find optimal threshold
optimal_idx = np.argmax(f1_scores)
optimal_threshold = thresholds[optimal_idx]
print("Optimal threshold: ", optimal_threshold)
# ***
# **Insight**
# ***
# - The ROC curve shows the trade-off between the True Positive Rate (TPR) and the False Positive Rate (FPR) at different thresholds.
# - The closer the curve is to the point (0, 1) in the upper-left corner, the better the model performs, as the ROC curve above shows.
# - The Area Under the Curve (AUC) measures the area under the ROC curve; the larger the AUC, the better the model's performance.
# - With an AUC of 0.99, the model separates the positive and negative classes very well.
# - The optimal threshold obtained is 1, i.e. the value that balances TPR and FPR for this model.
# # **Model Selection Interpretation**
# Build a list of evaluation results for the models obtained through hyperparameter tuning
eval_list = [
{
"model": "Random Forest Classifier",
" F1-Score_train": random_search01.best_score_,
"Accuracy_Test": accuracy_score(y_test, y_pred01),
"Best Hyperparameters": best_params01,
"F1-Score_Test": f1_score(y_test, y_pred01),
"ROC AUC Score": roc_auc_score(y_test, y_pred01),
},
{
"model": "Gradient Boosting Classifier",
" F1-Score_train": random_search02.best_score_,
"Accuracy_Test": accuracy_score(y_test, y_pred02),
"Best Hyperparameters": best_params02,
"F1-Score_Test": f1_score(y_test, y_pred02),
"ROC AUC Score": roc_auc_score(y_test, y_pred02),
},
{
"model": "KNN Classifier",
" F1-Score_train": random_search03.best_score_,
"Accuracy_Test": accuracy_score(y_test, y_pred03),
"Best Hyperparameters": best_params03,
"F1-Score_Test": f1_score(y_test, y_pred03),
"ROC AUC Score": roc_auc_score(y_test, y_pred03),
},
]
# Create a dataframe from the model evaluation list
eval_df = pd.DataFrame(eval_list)
# Display the model evaluation dataframe
eval_df
|
[{"heart-disease-dataset/heart.csv": {"column_names": "[\"age\", \"sex\", \"cp\", \"trestbps\", \"chol\", \"fbs\", \"restecg\", \"thalach\", \"exang\", \"oldpeak\", \"slope\", \"ca\", \"thal\", \"target\"]", "column_data_types": "{\"age\": \"int64\", \"sex\": \"int64\", \"cp\": \"int64\", \"trestbps\": \"int64\", \"chol\": \"int64\", \"fbs\": \"int64\", \"restecg\": \"int64\", \"thalach\": \"int64\", \"exang\": \"int64\", \"oldpeak\": \"float64\", \"slope\": \"int64\", \"ca\": \"int64\", \"thal\": \"int64\", \"target\": \"int64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1025 entries, 0 to 1024\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 age 1025 non-null int64 \n 1 sex 1025 non-null int64 \n 2 cp 1025 non-null int64 \n 3 trestbps 1025 non-null int64 \n 4 chol 1025 non-null int64 \n 5 fbs 1025 non-null int64 \n 6 restecg 1025 non-null int64 \n 7 thalach 1025 non-null int64 \n 8 exang 1025 non-null int64 \n 9 oldpeak 1025 non-null float64\n 10 slope 1025 non-null int64 \n 11 ca 1025 non-null int64 \n 12 thal 1025 non-null int64 \n 13 target 1025 non-null int64 \ndtypes: float64(1), int64(13)\nmemory usage: 112.2 KB\n", "summary": "{\"age\": {\"count\": 1025.0, \"mean\": 54.43414634146342, \"std\": 9.072290233244278, \"min\": 29.0, \"25%\": 48.0, \"50%\": 56.0, \"75%\": 61.0, \"max\": 77.0}, \"sex\": {\"count\": 1025.0, \"mean\": 0.6956097560975609, \"std\": 0.4603733241196493, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 1.0}, \"cp\": {\"count\": 1025.0, \"mean\": 0.9424390243902439, \"std\": 1.029640743645865, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 2.0, \"max\": 3.0}, \"trestbps\": {\"count\": 1025.0, \"mean\": 131.61170731707318, \"std\": 17.516718005376408, \"min\": 94.0, \"25%\": 120.0, \"50%\": 130.0, \"75%\": 140.0, \"max\": 200.0}, \"chol\": {\"count\": 1025.0, \"mean\": 246.0, \"std\": 51.59251020618206, \"min\": 126.0, \"25%\": 211.0, \"50%\": 240.0, \"75%\": 275.0, \"max\": 564.0}, \"fbs\": {\"count\": 1025.0, \"mean\": 0.14926829268292682, \"std\": 0.3565266897271575, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 0.0, \"max\": 1.0}, \"restecg\": {\"count\": 1025.0, \"mean\": 0.5297560975609756, \"std\": 0.5278775668748921, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 2.0}, \"thalach\": {\"count\": 1025.0, \"mean\": 149.11414634146342, \"std\": 23.005723745977207, \"min\": 71.0, \"25%\": 132.0, \"50%\": 152.0, \"75%\": 166.0, \"max\": 202.0}, \"exang\": {\"count\": 1025.0, \"mean\": 0.33658536585365856, \"std\": 0.47277237600371186, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 1.0, \"max\": 1.0}, \"oldpeak\": {\"count\": 1025.0, \"mean\": 1.0715121951219515, \"std\": 1.175053255150176, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.8, \"75%\": 1.8, \"max\": 6.2}, \"slope\": {\"count\": 1025.0, \"mean\": 1.3853658536585365, \"std\": 0.6177552671745918, \"min\": 0.0, \"25%\": 1.0, \"50%\": 1.0, \"75%\": 2.0, \"max\": 2.0}, \"ca\": {\"count\": 1025.0, \"mean\": 0.7541463414634146, \"std\": 1.0307976650242823, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 1.0, \"max\": 4.0}, \"thal\": {\"count\": 1025.0, \"mean\": 2.32390243902439, \"std\": 0.6206602380510298, \"min\": 0.0, \"25%\": 2.0, \"50%\": 2.0, \"75%\": 3.0, \"max\": 3.0}, \"target\": {\"count\": 1025.0, \"mean\": 0.5131707317073171, \"std\": 0.5000704980788014, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 1.0}}", "examples": 
"{\"age\":{\"0\":52,\"1\":53,\"2\":70,\"3\":61},\"sex\":{\"0\":1,\"1\":1,\"2\":1,\"3\":1},\"cp\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0},\"trestbps\":{\"0\":125,\"1\":140,\"2\":145,\"3\":148},\"chol\":{\"0\":212,\"1\":203,\"2\":174,\"3\":203},\"fbs\":{\"0\":0,\"1\":1,\"2\":0,\"3\":0},\"restecg\":{\"0\":1,\"1\":0,\"2\":1,\"3\":1},\"thalach\":{\"0\":168,\"1\":155,\"2\":125,\"3\":161},\"exang\":{\"0\":0,\"1\":1,\"2\":1,\"3\":0},\"oldpeak\":{\"0\":1.0,\"1\":3.1,\"2\":2.6,\"3\":0.0},\"slope\":{\"0\":2,\"1\":0,\"2\":0,\"3\":2},\"ca\":{\"0\":2,\"1\":0,\"2\":0,\"3\":1},\"thal\":{\"0\":3,\"1\":3,\"2\":3,\"3\":3},\"target\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0}}"}}]
| true | 1 |
<start_data_description><data_path>heart-disease-dataset/heart.csv:
<column_names>
['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang', 'oldpeak', 'slope', 'ca', 'thal', 'target']
<column_types>
{'age': 'int64', 'sex': 'int64', 'cp': 'int64', 'trestbps': 'int64', 'chol': 'int64', 'fbs': 'int64', 'restecg': 'int64', 'thalach': 'int64', 'exang': 'int64', 'oldpeak': 'float64', 'slope': 'int64', 'ca': 'int64', 'thal': 'int64', 'target': 'int64'}
<dataframe_Summary>
{'age': {'count': 1025.0, 'mean': 54.43414634146342, 'std': 9.072290233244278, 'min': 29.0, '25%': 48.0, '50%': 56.0, '75%': 61.0, 'max': 77.0}, 'sex': {'count': 1025.0, 'mean': 0.6956097560975609, 'std': 0.4603733241196493, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 1.0}, 'cp': {'count': 1025.0, 'mean': 0.9424390243902439, 'std': 1.029640743645865, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 2.0, 'max': 3.0}, 'trestbps': {'count': 1025.0, 'mean': 131.61170731707318, 'std': 17.516718005376408, 'min': 94.0, '25%': 120.0, '50%': 130.0, '75%': 140.0, 'max': 200.0}, 'chol': {'count': 1025.0, 'mean': 246.0, 'std': 51.59251020618206, 'min': 126.0, '25%': 211.0, '50%': 240.0, '75%': 275.0, 'max': 564.0}, 'fbs': {'count': 1025.0, 'mean': 0.14926829268292682, 'std': 0.3565266897271575, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 0.0, 'max': 1.0}, 'restecg': {'count': 1025.0, 'mean': 0.5297560975609756, 'std': 0.5278775668748921, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 2.0}, 'thalach': {'count': 1025.0, 'mean': 149.11414634146342, 'std': 23.005723745977207, 'min': 71.0, '25%': 132.0, '50%': 152.0, '75%': 166.0, 'max': 202.0}, 'exang': {'count': 1025.0, 'mean': 0.33658536585365856, 'std': 0.47277237600371186, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 1.0}, 'oldpeak': {'count': 1025.0, 'mean': 1.0715121951219515, 'std': 1.175053255150176, 'min': 0.0, '25%': 0.0, '50%': 0.8, '75%': 1.8, 'max': 6.2}, 'slope': {'count': 1025.0, 'mean': 1.3853658536585365, 'std': 0.6177552671745918, 'min': 0.0, '25%': 1.0, '50%': 1.0, '75%': 2.0, 'max': 2.0}, 'ca': {'count': 1025.0, 'mean': 0.7541463414634146, 'std': 1.0307976650242823, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 4.0}, 'thal': {'count': 1025.0, 'mean': 2.32390243902439, 'std': 0.6206602380510298, 'min': 0.0, '25%': 2.0, '50%': 2.0, '75%': 3.0, 'max': 3.0}, 'target': {'count': 1025.0, 'mean': 0.5131707317073171, 'std': 0.5000704980788014, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 1.0}}
<dataframe_info>
RangeIndex: 1025 entries, 0 to 1024
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 1025 non-null int64
1 sex 1025 non-null int64
2 cp 1025 non-null int64
3 trestbps 1025 non-null int64
4 chol 1025 non-null int64
5 fbs 1025 non-null int64
6 restecg 1025 non-null int64
7 thalach 1025 non-null int64
8 exang 1025 non-null int64
9 oldpeak 1025 non-null float64
10 slope 1025 non-null int64
11 ca 1025 non-null int64
12 thal 1025 non-null int64
13 target 1025 non-null int64
dtypes: float64(1), int64(13)
memory usage: 112.2 KB
<some_examples>
{'age': {'0': 52, '1': 53, '2': 70, '3': 61}, 'sex': {'0': 1, '1': 1, '2': 1, '3': 1}, 'cp': {'0': 0, '1': 0, '2': 0, '3': 0}, 'trestbps': {'0': 125, '1': 140, '2': 145, '3': 148}, 'chol': {'0': 212, '1': 203, '2': 174, '3': 203}, 'fbs': {'0': 0, '1': 1, '2': 0, '3': 0}, 'restecg': {'0': 1, '1': 0, '2': 1, '3': 1}, 'thalach': {'0': 168, '1': 155, '2': 125, '3': 161}, 'exang': {'0': 0, '1': 1, '2': 1, '3': 0}, 'oldpeak': {'0': 1.0, '1': 3.1, '2': 2.6, '3': 0.0}, 'slope': {'0': 2, '1': 0, '2': 0, '3': 2}, 'ca': {'0': 2, '1': 0, '2': 0, '3': 1}, 'thal': {'0': 3, '1': 3, '2': 3, '3': 3}, 'target': {'0': 0, '1': 0, '2': 0, '3': 0}}
<end_description>
| 16,065 | 0 | 17,402 | 16,065 |
129219616
|
# # Exploring Hacker News Posts
# In this project, we'll compare two different types of posts from Hacker News, a popular site where technology related stories (or 'posts') are voted and commented upon. The two types of posts we'll explore begin with either `Ask HN` or `Show HN`.
# Users submit `Ask HN` posts to ask the Hacker News community a specific question, such as "What is the best online course you've ever taken?" Likewise, users submit Show HN posts to show the Hacker News community a project, product, or just generally something interesting.
# We'll specifically compare these two types of posts to determine the following:
# - Do `Ask HN` or `Show HN` receive more comments on average?
# - Do posts created at a certain time receive more comments on average?
#
# It should be noted that the data set we're working with was reduced from almost 300,000 rows to approximately 20,000 rows by removing all submissions that did not receive any comments, and then randomly sampling from the remaining submissions.
# ## Introduction
# First, we'll read in the data and remove the headers.
from csv import reader
opened_file = open("Hacker_news.csv")
read_file = reader(opened_file)
hn = list(read_file)
print(hn[:5])
# Notice that the first list in the inner lists contains the column headers, and the lists after contain the data for one row. In order to analyze our data, we need to first remove the row containing the column headers. Let's remove that first row next.
headers = hn[0]
hn = hn[1:]
print(headers)
print("\n")
print(hn[:5])
# ## Extracting Ask HN and Show HN Posts
# Now that we've removed the headers from `hn`, we're ready to filter our data. Since we're only concerned with post titles beginning with `Ask HN` or `Show HN`, we'll create new lists of lists containing just the data for those titles.
ask_posts = []
show_posts = []
other_posts = []
for post in hn:
title = post[1]
if title.lower().startswith("ask hn"):
ask_posts.append(post)
elif title.lower().startswith("show hn"):
show_posts.append(post)
else:
other_posts.append(post)
print("number of ask posts:", len(ask_posts))
print("number of show posts:", len(show_posts))
print("number of other posts:", len(other_posts))
print(ask_posts[:5])
print(show_posts[:5])
print(other_posts[:5])
# average number of comments in ask posts
total_ask_comments = 0
for row in ask_posts:
n_comments = float(row[4])
total_ask_comments += n_comments
avg_ask_comments = total_ask_comments / len(ask_posts)
total_show_comments = 0
# average number of comments in show posts
for row in show_posts:
n_comments = float(row[4])
total_show_comments += n_comments
avg_show_comments = total_show_comments / len(show_posts)
print(avg_ask_comments)
print(avg_show_comments)
# On average, ask posts in our sample receive approximately 10 comments, whereas show posts receive approximately 4. Since ask posts are more likely to receive comments, we'll focus our remaining analysis just on these posts.
# ## Finding the Number of Ask Posts and Comments by Hour Created
# We'll determine if ask posts created at a certain time are more likely to attract comments. We'll use the following steps to perform this analysis:
# 1. Calculate the number of ask posts created in each hour of the day, along with the number of comments received.
# 2. Calculate the average number of comments ask posts receive by hour created.
import datetime as dt
result_list = []
for row in ask_posts:
result_list.append([row[6], int(row[4])])
counts_by_hour = {}
comments_by_hour = {}
date_format = "%m/%d/%Y %H:%M"
for row in result_list:
date_dt = dt.datetime.strptime(row[0], date_format)
hour = date_dt.strftime("%H")
if hour not in counts_by_hour:
counts_by_hour[hour] = 1
comments_by_hour[hour] = row[1]
else:
counts_by_hour[hour] += 1
comments_by_hour[hour] += row[1]
comments_by_hour
# ## Calculating the Average Number of Comments for Ask HN Posts by Hour
# Calculate the average amount of comments `Ask HN` posts created at each hour of the day receive.
avg_by_hour = []
for hr in comments_by_hour:
avg_by_hour.append([hr, comments_by_hour[hr] / counts_by_hour[hr]])
avg_by_hour
swap_avg_by_hour = []
for row in avg_by_hour:
swap_avg_by_hour.append([row[1], row[0]])
print(swap_avg_by_hour)
sorted_swap = sorted(swap_avg_by_hour, reverse=True)
sorted_swap
# Sort the values and print the 5 hours with the highest average comments.
print("Top 5 Hours for 'Ask HN' Comments")
for [avg, hr] in sorted_swap[:5]: # alternative syntax
print(
"{}: {:.2f} average comments per post".format(
dt.datetime.strptime(hr, "%H").strftime("%H:%M"), avg
)
)
# dt.datetime.strptime(hr, "%H") returns a datetime object, and strftime("%H:%M") then formats it as an hour:minute string.
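# A small illustration of the parse-then-format pattern used above; the timestamp string is a made-up example in the dataset's "%m/%d/%Y %H:%M" format.
example_created_at = "8/4/2016 11:52"  # hypothetical created_at value
parsed = dt.datetime.strptime(example_created_at, "%m/%d/%Y %H:%M")  # string -> datetime
print(parsed.strftime("%H:%M"))  # datetime -> "11:52"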
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/219/129219616.ipynb
| null | null |
[{"Id": 129219616, "ScriptId": 38417294, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8032771, "CreationDate": "05/12/2023 00:26:04", "VersionNumber": 1.0, "Title": "notebooka804fb3072", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 144.0, "LinesInsertedFromPrevious": 144.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 1,411 | 0 | 1,411 | 1,411 |