Dataset columns and value ranges (as reported by the dataset viewer):

| Column | Type | Values |
|---|---|---|
| file_id | string | lengths 5-9 |
| content | string | lengths 100-5.25M |
| local_path | string | lengths 66-70 |
| kaggle_dataset_name | string (nullable) | lengths 3-50 |
| kaggle_dataset_owner | string (nullable) | lengths 3-20 |
| kversion | string (nullable) | lengths 497-763 |
| kversion_datasetsources | string (nullable) | lengths 71-5.46k |
| dataset_versions | string (nullable) | lengths 338-235k |
| datasets | string (nullable) | lengths 334-371 |
| users | string (nullable) | lengths 111-264 |
| script | string | lengths 100-5.25M |
| df_info | string | lengths 0-4.87M |
| has_data_info | bool | 2 classes |
| nb_filenames | int64 | 0-370 |
| retreived_data_description | string | lengths 0-4.44M |
| script_nb_tokens | int64 | 25-663k |
| upvotes | int64 | 0-1.65k |
| tokens_description | int64 | 25-663k |
| tokens_script | int64 | 25-663k |
129822326
|
<jupyter_start><jupyter_text>Cyclistic-Divvy-Trips-2021
### Context
This dataset was created in order to complete Case Study 1 of the Google Data Analytics Certificate.
### Content
This dataset contains 12 months of trip data from Chicago's Divvy Ride Share Service for the period January to December 2021. The dataset was acquired from [here](https://divvy-tripdata.s3.amazonaws.com/index.html).
The data has been made available by Motivate International Inc. under this [license](https://ride.divvybikes.com/data-license-agreement).
### Inspiration
I hope this will help others to also complete their case study.
Kaggle dataset identifier: divvytrips2021
<jupyter_script>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
list_of_2021 = [
"202101-divvy-tripdata.csv",
"202102-divvy-tripdata.csv",
"202103-divvy-tripdata.csv",
"202104-divvy-tripdata.csv",
"202105-divvy-tripdata.csv",
"202106-divvy-tripdata.csv",
"202107-divvy-tripdata.csv",
"202108-divvy-tripdata.csv",
"202109-divvy-tripdata.csv",
"202110-divvy-tripdata.csv",
"202111-divvy-tripdata.csv",
"202112-divvy-tripdata.csv",
]
df_2021 = pd.DataFrame()
for i in list_of_2021:
file_2021 = pd.read_csv(i)
df_2021 = pd.concat([df_2021, file_2021])
df_2021.info()
# Before we start cleaning the data, let's have a look at the dataset. It has 13 columns, and every column is stored as either object or float64. As the info above for df_2021 shows, columns such as started_at, ended_at and start_station_name need their data types changed, and we also need to do some other operations such as creating new columns and removing extra spaces. To simplify these steps we can write our own helper functions
def xtra_space(df_name): # removing extra spaces
for i in df_name.columns:
df_name[i] = df_name[i].astype(str).str.strip()
def datetime(df_name, date_list): # converting to datetime
for i in date_list:
df_name[i] = pd.to_datetime(df_name[i])
def category(df_name, category_list): # converting to category
for i in category_list:
df_name[i] = df_name[i].astype("category")
def drop(df_name, column_list): # drop the columns
df_name.drop(column_list, axis=1, inplace=True)
def day(df_name): # make columns of day name
df_name["day"] = df_name["started_at"].dt.day_name()
def month(df_name): # make column of month name
df_name["month"] = df_name["started_at"].dt.month_name()
df_2021.head()
# We need to check whether there are any duplicate entries (the ride_id column should be unique) and remove them
df_2021 = df_2021.drop_duplicates()
# It is also important that there are no missing values
print(df_2021[df_2021["ride_id"].isna()])
print(df_2021[df_2021["ride_id"] == "nan"])
drop(df_2021, ["ride_id"])
df_2021.info()
xtra_space(df_2021)
drop(df_2021, ["start_station_id", "end_station_id"])
datetime(df_2021, ["started_at", "ended_at"])
category(
df_2021,
["rideable_type", "start_station_name", "end_station_name", "member_casual"],
)
df_2021["start_lat"] = df_2021["start_lat"].astype("float")
df_2021["start_lng"] = df_2021["start_lng"].astype("float")
df_2021["end_lat"] = df_2021["end_lat"].astype("float")
df_2021["end_lng"] = df_2021["end_lng"].astype("float")
df_2021["trip_duration"] = (
df_2021["ended_at"] - df_2021["started_at"]
).dt.total_seconds()
df_2021.info()
df_2021["trip_duration"].mean()
df_2021["trip_duration"].max()
df_2021["trip_duration"].min()
plt.boxplot(df_2021["trip_duration"])
plt.show()
trip_duration_Q1 = df_2021["trip_duration"].quantile(0.25)
trip_duration_Q3 = df_2021["trip_duration"].quantile(0.75)
trip_duration_IQR = trip_duration_Q3 - trip_duration_Q1
trip_duration_outliers = (
(trip_duration_Q3 + 1.5 * trip_duration_IQR) < df_2021["trip_duration"]
) | ((trip_duration_Q1 - 1.5 * trip_duration_IQR) > df_2021["trip_duration"])
df_2021.drop(df_2021[trip_duration_outliers].index, axis=0, inplace=True)
df_2021.info()
df_2021["trip_duration"] = df_2021["trip_duration"] // 60
df_2021["trip_duration"] = df_2021["trip_duration"].astype("int8")
df_2021.info()
df_2021["trip_duration"].min()
df_2021.drop(df_2021[df_2021["trip_duration"] < 0].index, axis=0, inplace=True)
df_2021.info()
df_2021[df_2021["rideable_type"] == "nan"].info()
df_2021[df_2021["rideable_type"].isna()].info()
df_2021[df_2021["started_at"].isna()].info()
df_2021[df_2021["started_at"] == "nan"].info()
df_2021[df_2021["ended_at"].isna()].info()
df_2021[df_2021["ended_at"] == "nan"].info()
df_2021[df_2021["start_station_name"] == "nan"].info()
df_2021.drop(
df_2021[df_2021["start_station_name"] == "nan"].index, axis=0, inplace=True
)
df_2021[df_2021["end_station_name"] == "nan"].info()
df_2021.drop(df_2021[df_2021["end_station_name"] == "nan"].index, axis=0, inplace=True)
df_2021.info()
df_2021["rideable_type"].nunique()
df_2021["member_casual"].nunique()
df_2021["start_station_name"].nunique()
df_2021["end_station_name"].nunique()
day(df_2021)
df_2021.info()
month(df_2021)
df_2021.info()
category(df_2021, ["day", "month"])
df_2021.info()
df_2021 = df_2021[df_2021["trip_duration"] > 0]
df_2021.info()
df_member = df_2021[df_2021["member_casual"] == "member"]
df_member["trip_duration"].reset_index().describe()["trip_duration"]
max_no_trip_member = df_member[
df_member["trip_duration"] == df_member["trip_duration"].max()
].shape[0]
min_no_trip_member = df_member[
df_member["trip_duration"] == df_member["trip_duration"].min()
].shape[0]
median_no_trip_member = df_member[
df_member["trip_duration"] == df_member["trip_duration"].median()
].shape[0]
mode_no_trip_member = df_member[
df_member["trip_duration"] == df_member["trip_duration"].mode()[0]
].shape[0]
print("No. of member rides equal to Max of trip_duration:", max_no_trip_member)
print("No. of member rides equal to Min of trip_duration:", min_no_trip_member)
print("No. of member rides equal to Median of trip_duration:", median_no_trip_member)
print("No. of member rides equal to Mode of trip_duaration:", mode_no_trip_member)
df_casual = df_2021[df_2021["member_casual"] == "casual"]
df_casual["trip_duration"].reset_index().describe()["trip_duration"]
max_no_trip_casual = df_casual[
df_casual["trip_duration"] == df_casual["trip_duration"].max()
].shape[0]
min_no_trip_casual = df_casual[
df_casual["trip_duration"] == df_casual["trip_duration"].min()
].shape[0]
median_no_trip_casual = df_casual[
df_casual["trip_duration"] == df_casual["trip_duration"].median()
].shape[0]
mode_no_trip_casual = df_casual[
df_casual["trip_duration"] == df_casual["trip_duration"].mode()[0]
].shape[0]
print("No. of casual rides equal to Max of trip_duration:", max_no_trip_casual)
print("No. of casual rides equal to Min of trip_duration:", min_no_trip_casual)
print("No. of casual rides equal to Median of trip_duration:", median_no_trip_casual)
print("No. of casual rides equal to Mode of trip_duration:", mode_no_trip_casual)
df_2021.to_csv("df_2021.csv")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/822/129822326.ipynb
|
divvytrips2021
|
michaeljohnsonjr
|
[{"Id": 129822326, "ScriptId": 38610372, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11888060, "CreationDate": "05/16/2023 18:07:18", "VersionNumber": 2.0, "Title": "Google Capstone on rider data 2021", "EvaluationDate": "05/16/2023", "IsChange": false, "TotalLines": 200.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 200.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186199829, "KernelVersionId": 129822326, "SourceDatasetVersionId": 3047131}]
|
[{"Id": 3047131, "DatasetId": 1865967, "DatasourceVersionId": 3095632, "CreatorUserId": 8513040, "LicenseName": "Other (specified in description)", "CreationDate": "01/15/2022 15:27:39", "VersionNumber": 1.0, "Title": "Cyclistic-Divvy-Trips-2021", "Slug": "divvytrips2021", "Subtitle": "Divvy Trip Data (January 2021 - December 2021)", "Description": "### Context\n\nThis dataset was created in order to complete Case Study 1 of the Google Data Analytics Certificate.\n\n### Content\n\nThis dataset contains 12 months of trip data from Chicago's Divvy Ride Share Service for the period January to December 2021. The dataset was acquired from [here](https://divvy-tripdata.s3.amazonaws.com/index.html).\n\nThe data has been made available by Motivate International Inc. under this [license](https://ride.divvybikes.com/data-license-agreement).\n\n### Inspiration\n\nI hope this will help others to also complete their case study.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1865967, "CreatorUserId": 8513040, "OwnerUserId": 8513040.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3047131.0, "CurrentDatasourceVersionId": 3095632.0, "ForumId": 1889028, "Type": 2, "CreationDate": "01/15/2022 15:27:39", "LastActivityDate": "01/15/2022", "TotalViews": 1827, "TotalDownloads": 346, "TotalVotes": 9, "TotalKernels": 64}]
|
[{"Id": 8513040, "UserName": "michaeljohnsonjr", "DisplayName": "Michael Johnson", "RegisterDate": "10/05/2021", "PerformanceTier": 1}]
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
list_of_2021 = [
"202101-divvy-tripdata.csv",
"202102-divvy-tripdata.csv",
"202103-divvy-tripdata.csv",
"202104-divvy-tripdata.csv",
"202105-divvy-tripdata.csv",
"202106-divvy-tripdata.csv",
"202107-divvy-tripdata.csv",
"202108-divvy-tripdata.csv",
"202109-divvy-tripdata.csv",
"202110-divvy-tripdata.csv",
"202111-divvy-tripdata.csv",
"202112-divvy-tripdata.csv",
]
df_2021 = pd.DataFrame()
for i in list_of_2021:
file_2021 = pd.read_csv(i)
df_2021 = pd.concat([df_2021, file_2021])
df_2021.info()
# Before we start cleaning the data, let's have a look at the dataset. It has 13 columns, and every column is stored as either object or float64. As the info above for df_2021 shows, columns such as started_at, ended_at and start_station_name need their data types changed, and we also need to do some other operations such as creating new columns and removing extra spaces. To simplify these steps we can write our own helper functions
def xtra_space(df_name): # removing extra spaces
for i in df_name.columns:
df_name[i] = df_name[i].astype(str).str.strip()
def datetime(df_name, date_list): # converting to datetime
for i in date_list:
df_name[i] = pd.to_datetime(df_name[i])
def category(df_name, category_list): # converting to category
for i in category_list:
df_name[i] = df_name[i].astype("category")
def drop(df_name, column_list): # drop the columns
df_name.drop(column_list, axis=1, inplace=True)
def day(df_name): # make columns of day name
df_name["day"] = df_name["started_at"].dt.day_name()
def month(df_name): # make column of month name
df_name["month"] = df_name["started_at"].dt.month_name()
df_2021.head()
# We need to check whether there are any duplicate entries (the ride_id column should be unique) and remove them
df_2021 = df_2021.drop_duplicates()
# It is also important that there are no missing values
print(df_2021[df_2021["ride_id"].isna()])
print(df_2021[df_2021["ride_id"] == "nan"])
drop(df_2021, ["ride_id"])
df_2021.info()
xtra_space(df_2021)
drop(df_2021, ["start_station_id", "end_station_id"])
datetime(df_2021, ["started_at", "ended_at"])
category(
df_2021,
["rideable_type", "start_station_name", "end_station_name", "member_casual"],
)
df_2021["start_lat"] = df_2021["start_lat"].astype("float")
df_2021["start_lng"] = df_2021["start_lng"].astype("float")
df_2021["end_lat"] = df_2021["end_lat"].astype("float")
df_2021["end_lng"] = df_2021["end_lng"].astype("float")
df_2021["trip_duration"] = (
df_2021["ended_at"] - df_2021["started_at"]
).dt.total_seconds()
df_2021.info()
df_2021["trip_duration"].mean()
df_2021["trip_duration"].max()
df_2021["trip_duration"].min()
plt.boxplot(df_2021["trip_duration"])
plt.show()
trip_duration_Q1 = df_2021["trip_duration"].quantile(0.25)
trip_duration_Q3 = df_2021["trip_duration"].quantile(0.75)
trip_duration_IQR = trip_duration_Q3 - trip_duration_Q1
trip_duration_outliers = (
(trip_duration_Q3 + 1.5 * trip_duration_IQR) < df_2021["trip_duration"]
) | ((trip_duration_Q1 - 1.5 * trip_duration_IQR) > df_2021["trip_duration"])
df_2021.drop(df_2021[trip_duration_outliers].index, axis=0, inplace=True)
df_2021.info()
df_2021["trip_duration"] = df_2021["trip_duration"] // 60
df_2021["trip_duration"] = df_2021["trip_duration"].astype("int8")
df_2021.info()
df_2021["trip_duration"].min()
df_2021.drop(df_2021[df_2021["trip_duration"] < 0].index, axis=0, inplace=True)
df_2021.info()
df_2021[df_2021["rideable_type"] == "nan"].info()
df_2021[df_2021["rideable_type"].isna()].info()
df_2021[df_2021["started_at"].isna()].info()
df_2021[df_2021["started_at"] == "nan"].info()
df_2021[df_2021["ended_at"].isna()].info()
df_2021[df_2021["ended_at"] == "nan"].info()
df_2021[df_2021["start_station_name"] == "nan"].info()
df_2021.drop(
df_2021[df_2021["start_station_name"] == "nan"].index, axis=0, inplace=True
)
df_2021[df_2021["end_station_name"] == "nan"].info()
df_2021.drop(df_2021[df_2021["end_station_name"] == "nan"].index, axis=0, inplace=True)
df_2021.info()
df_2021["rideable_type"].nunique()
df_2021["member_casual"].nunique()
df_2021["start_station_name"].nunique()
df_2021["end_station_name"].nunique()
day(df_2021)
df_2021.info()
month(df_2021)
df_2021.info()
category(df_2021, ["day", "month"])
df_2021.info()
df_2021 = df_2021[df_2021["trip_duration"] > 0]
df_2021.info()
df_member = df_2021[df_2021["member_casual"] == "member"]
df_member["trip_duration"].reset_index().describe()["trip_duration"]
max_no_trip_member = df_member[
df_member["trip_duration"] == df_member["trip_duration"].max()
].shape[0]
min_no_trip_member = df_member[
df_member["trip_duration"] == df_member["trip_duration"].min()
].shape[0]
median_no_trip_member = df_member[
df_member["trip_duration"] == df_member["trip_duration"].median()
].shape[0]
mode_no_trip_member = df_member[
df_member["trip_duration"] == df_member["trip_duration"].mode()[0]
].shape[0]
print("No. of member rides equal to Max of trip_duration:", max_no_trip_member)
print("No. of member rides equal to Min of trip_duration:", min_no_trip_member)
print("No. of member rides equal to Median of trip_duration:", median_no_trip_member)
print("No. of member rides equal to Mode of trip_duaration:", mode_no_trip_member)
df_casual = df_2021[df_2021["member_casual"] == "casual"]
df_casual["trip_duration"].reset_index().describe()["trip_duration"]
max_no_trip_casual = df_casual[
df_casual["trip_duration"] == df_casual["trip_duration"].max()
].shape[0]
min_no_trip_casual = df_casual[
df_casual["trip_duration"] == df_casual["trip_duration"].min()
].shape[0]
median_no_trip_casual = df_casual[
df_casual["trip_duration"] == df_casual["trip_duration"].median()
].shape[0]
mode_no_trip_casual = df_casual[
df_casual["trip_duration"] == df_casual["trip_duration"].mode()[0]
].shape[0]
print("No. of casual rides equal to Max of trip_duration:", max_no_trip_casual)
print("No. of casual rides equal to Min of trip_duration:", min_no_trip_casual)
print("No. of casual rides equal to Median of trip_duration:", median_no_trip_casual)
print("No. of casual rides equal to Mode of trip_duration:", mode_no_trip_casual)
df_2021.to_csv("df_2021.csv")
| false | 0 | 2,609 | 0 | 2,791 | 2,609 |
||
129822099
|
<jupyter_start><jupyter_text>citadel_photo
Kaggle dataset identifier: citadel-photo
<jupyter_script>from IPython.display import Image
Image(filename="/kaggle/input/citadel-photo/Qaitbay-Citadel-Trips-in-Egypt-2.jpg")
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
img_path = "/kaggle/input/citadel-photo/Qaitbay-Citadel-Trips-in-Egypt-2.jpg"
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights="imagenet")
preds = model.predict(x)
print("Predicted:", decode_predictions(preds, top=3)[0])
def classify(img_path):
display(Image(filename=img_path))
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
print("Predicted:", decode_predictions(preds, top=3)[0])
classify("PHOTO_PATH")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/822/129822099.ipynb
|
citadel-photo
|
drgalal
|
[{"Id": 129822099, "ScriptId": 38605968, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13408885, "CreationDate": "05/16/2023 18:05:16", "VersionNumber": 2.0, "Title": "Transferlearning_resnet50", "EvaluationDate": "05/16/2023", "IsChange": false, "TotalLines": 34.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 34.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186199524, "KernelVersionId": 129822099, "SourceDatasetVersionId": 5699899}]
|
[{"Id": 5699899, "DatasetId": 3277568, "DatasourceVersionId": 5775560, "CreatorUserId": 13408885, "LicenseName": "Unknown", "CreationDate": "05/16/2023 15:59:57", "VersionNumber": 1.0, "Title": "citadel_photo", "Slug": "citadel-photo", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3277568, "CreatorUserId": 13408885, "OwnerUserId": 13408885.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5699899.0, "CurrentDatasourceVersionId": 5775560.0, "ForumId": 3343260, "Type": 2, "CreationDate": "05/16/2023 15:59:57", "LastActivityDate": "05/16/2023", "TotalViews": 23, "TotalDownloads": 0, "TotalVotes": 0, "TotalKernels": 0}]
|
[{"Id": 13408885, "UserName": "drgalal", "DisplayName": "Dr Galal", "RegisterDate": "01/25/2023", "PerformanceTier": 0}]
|
from IPython.display import Image
Image(filename="/kaggle/input/citadel-photo/Qaitbay-Citadel-Trips-in-Egypt-2.jpg")
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
img_path = "/kaggle/input/citadel-photo/Qaitbay-Citadel-Trips-in-Egypt-2.jpg"
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights="imagenet")
preds = model.predict(x)
print("Predicted:", decode_predictions(preds, top=3)[0])
def classify(img_path):
display(Image(filename=img_path))
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
print("Predicted:", decode_predictions(preds, top=3)[0])
classify("PHOTO_PATH")
| false | 0 | 353 | 0 | 375 | 353 |
||
129822779
|
<jupyter_start><jupyter_text>Black Friday
Kaggle dataset identifier: black-friday
<jupyter_code>import pandas as pd
df = pd.read_csv('black-friday/train.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 550068 entries, 0 to 550067
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 User_ID 550068 non-null int64
1 Product_ID 550068 non-null object
2 Gender 550068 non-null object
3 Age 550068 non-null object
4 Occupation 550068 non-null int64
5 City_Category 550068 non-null object
6 Stay_In_Current_City_Years 550068 non-null object
7 Marital_Status 550068 non-null int64
8 Product_Category_1 550068 non-null int64
9 Product_Category_2 376430 non-null float64
10 Product_Category_3 166821 non-null float64
11 Purchase 550068 non-null int64
dtypes: float64(2), int64(5), object(5)
memory usage: 50.4+ MB
<jupyter_text>Examples:
{
"User_ID": 1000001,
"Product_ID": "P00069042",
"Gender": "F",
"Age": "0-17",
"Occupation": 10,
"City_Category": "A",
"Stay_In_Current_City_Years": 2,
"Marital_Status": 0,
"Product_Category_1": 3,
"Product_Category_2": NaN,
"Product_Category_3": NaN,
"Purchase": 8370
}
{
"User_ID": 1000001,
"Product_ID": "P00248942",
"Gender": "F",
"Age": "0-17",
"Occupation": 10,
"City_Category": "A",
"Stay_In_Current_City_Years": 2,
"Marital_Status": 0,
"Product_Category_1": 1,
"Product_Category_2": 6.0,
"Product_Category_3": 14.0,
"Purchase": 15200
}
{
"User_ID": 1000001,
"Product_ID": "P00087842",
"Gender": "F",
"Age": "0-17",
"Occupation": 10,
"City_Category": "A",
"Stay_In_Current_City_Years": 2,
"Marital_Status": 0,
"Product_Category_1": 12,
"Product_Category_2": NaN,
"Product_Category_3": NaN,
"Purchase": 1422
}
{
"User_ID": 1000001,
"Product_ID": "P00085442",
"Gender": "F",
"Age": "0-17",
"Occupation": 10,
"City_Category": "A",
"Stay_In_Current_City_Years": 2,
"Marital_Status": 0,
"Product_Category_1": 12,
"Product_Category_2": 14.0,
"Product_Category_3": NaN,
"Purchase": 1057
}
<jupyter_code>import pandas as pd
df = pd.read_csv('black-friday/test.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 233599 entries, 0 to 233598
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 User_ID 233599 non-null int64
1 Product_ID 233599 non-null object
2 Gender 233599 non-null object
3 Age 233599 non-null object
4 Occupation 233599 non-null int64
5 City_Category 233599 non-null object
6 Stay_In_Current_City_Years 233599 non-null object
7 Marital_Status 233599 non-null int64
8 Product_Category_1 233599 non-null int64
9 Product_Category_2 161255 non-null float64
10 Product_Category_3 71037 non-null float64
dtypes: float64(2), int64(4), object(5)
memory usage: 19.6+ MB
<jupyter_text>Examples:
{
"User_ID": 1000004,
"Product_ID": "P00128942",
"Gender": "M",
"Age": "46-50",
"Occupation": 7,
"City_Category": "B",
"Stay_In_Current_City_Years": "2",
"Marital_Status": 1,
"Product_Category_1": 1,
"Product_Category_2": 11,
"Product_Category_3": NaN
}
{
"User_ID": 1000009,
"Product_ID": "P00113442",
"Gender": "M",
"Age": "26-35",
"Occupation": 17,
"City_Category": "C",
"Stay_In_Current_City_Years": "0",
"Marital_Status": 0,
"Product_Category_1": 3,
"Product_Category_2": 5,
"Product_Category_3": NaN
}
{
"User_ID": 1000010,
"Product_ID": "P00288442",
"Gender": "F",
"Age": "36-45",
"Occupation": 1,
"City_Category": "B",
"Stay_In_Current_City_Years": "4+",
"Marital_Status": 1,
"Product_Category_1": 5,
"Product_Category_2": 14,
"Product_Category_3": NaN
}
{
"User_ID": 1000010,
"Product_ID": "P00145342",
"Gender": "F",
"Age": "36-45",
"Occupation": 1,
"City_Category": "B",
"Stay_In_Current_City_Years": "4+",
"Marital_Status": 1,
"Product_Category_1": 4,
"Product_Category_2": 9,
"Product_Category_3": NaN
}
<jupyter_script>import pandas as pd
import numpy as np
# Black Friday is a shopper’s delight. It is a day full of special shopping deals and big discounts and is considered the beginning of the holiday shopping season. On the flip side, businesses look at this day as an opportunity to win new customers, boost sales, and increase profits. The name ‘Black Friday’ is a reference to the black ink which was used to record profits! (Unfortunately, it's often the time of year when e-commerce fraud also peaks.)
# To make the most of the Black Friday sale, business owners can look at past data. That way, they can decide which products to offer at high discounts, which demographics to market to, and more.
df = pd.read_csv("/kaggle/input/black-friday/train.csv")
df_test = pd.read_csv("/kaggle/input/black-friday/test.csv")
# This CSV file contains 550,068 records about Black Friday purchases in a retail store. The dataset has the following attributes:
# User_ID : A unique value identifying each buyer
# Product_ID: A unique value identifying each product
# Gender : The gender of the buyer (M or F)
# Age: The buyer's age
# Occupation: The buyer's occupation, specified as a numeric value
# City_Category: The city category in which the purchase was made
# Stay_In_Current_City_Years: The number of years that a buyer has lived in their city
# Marital_Status: The marital status of the buyer. 0 denotes single, 1 denotes married.
# Product_Category_1: The main category of the product, specified as a number.
# Product_Category_2: The first subcategory of the product
# Product_Category_3: The second subcategory of the product
# Purchase: The amount spent by a user for a purchase, in dollars
df.info()
df.head()
# What is the average amount spent by buyers on Black Friday?
# What is the distribution of purchases by gender?
# Does age affect the amount spent on Black Friday?
# Which city category has the highest sales on Black Friday?
# What are the most popular product categories on Black Friday?
# Is there a difference in spending patterns between married and single buyers?
# Are certain occupations more likely to spend more on Black Friday?
# What is the correlation between the number of years a buyer has lived in their city and the amount spent on Black Friday?
# Is there a relationship between the product category and the amount spent?
# Are there any outliers or anomalies in the data that need to be investigated?
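# A minimal sketch (not in the original notebook) answering two of the questions
# above with the training dataframe `df` loaded earlier:
print("Average purchase amount:", df["Purchase"].mean())
print("Total sales by city category:")
print(df.groupby("City_Category")["Purchase"].sum().sort_values(ascending=False))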
# What is the distribution of purchases by gender?
df["Gender"].value_counts().plot(kind="bar")
df.groupby("Gender")["Purchase"].mean().plot(kind="bar")
# Does age affect the amount spent on Black Friday?
df["Age"].unique()
grouping = df.groupby("Age")
grouping["Purchase"].mean().plot(kind="bar")
# What are the most popular product categories on Black Friday?
df["Product_ID"].nunique()
df.groupby("Product_ID")["Purchase"].mean().sort_values(ascending=False).head(5).plot(
kind="bar"
)
# Is there a difference in spending patterns between married and single buyers?
df["Marital_Status"].value_counts()
df.groupby("Marital_Status")["Purchase"].mean().plot(kind="bar")
# Are certain occupations more likely to spend more on Black Friday?
df["Occupation"].unique()
df.groupby("Occupation")["Purchase"].mean().plot(kind="bar")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/822/129822779.ipynb
|
black-friday
|
sdolezel
|
[{"Id": 129822779, "ScriptId": 38608454, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10915486, "CreationDate": "05/16/2023 18:12:14", "VersionNumber": 1.0, "Title": "Black Friday Complete analysis", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 97.0, "LinesInsertedFromPrevious": 97.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186200518, "KernelVersionId": 129822779, "SourceDatasetVersionId": 14692}]
|
[{"Id": 14692, "DatasetId": 10479, "DatasourceVersionId": 14692, "CreatorUserId": 932915, "LicenseName": "Unknown", "CreationDate": "01/21/2018 12:36:05", "VersionNumber": 1.0, "Title": "Black Friday", "Slug": "black-friday", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 7870870.0, "TotalUncompressedBytes": 7870870.0}]
|
[{"Id": 10479, "CreatorUserId": 932915, "OwnerUserId": 932915.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 14692.0, "CurrentDatasourceVersionId": 14692.0, "ForumId": 17842, "Type": 2, "CreationDate": "01/21/2018 12:36:05", "LastActivityDate": "01/30/2018", "TotalViews": 125104, "TotalDownloads": 29293, "TotalVotes": 209, "TotalKernels": 113}]
|
[{"Id": 932915, "UserName": "sdolezel", "DisplayName": "StefanDolezel", "RegisterDate": "02/27/2017", "PerformanceTier": 0}]
|
import pandas as pd
import numpy as np
# Black Friday is a shopper’s delight. It is a day full of special shopping deals and big discounts and is considered the beginning of the holiday shopping season. On the flip side, businesses look at this day as an opportunity to win new customers, boost sales, and increase profits. The name ‘Black Friday’ is a reference to the black ink which was used to record profits! (Unfortunately, it's often the time of year when e-commerce fraud also peaks.)
# To make the most of the Black Friday sale, business owners can look at past data. That way, they can decide which products to offer at high discounts, which demographics to market to, and more.
df = pd.read_csv("/kaggle/input/black-friday/train.csv")
df_test = pd.read_csv("/kaggle/input/black-friday/test.csv")
# This CSV file contains 550,068 records about Black Friday purchases in a retail store. The dataset has the following attributes:
# User_ID : A unique value identifying each buyer
# Product_ID: A unique value identifying each product
# Gender : The gender of the buyer (M or F)
# Age: The buyer's age
# Occupation: The buyer's occupation, specified as a numeric value
# City_Category: The city category in which the purchase was made
# Stay_In_Current_City_Years: The number of years that a buyer has lived in their city
# Marital_Status: The marital status of the buyer. 0 denotes single, 1 denotes married.
# Product_Category_1: The main category of the product, specified as a number.
# Product_Category_2: The first subcategory of the product
# Product_Category_3: The second subcategory of the product
# Purchase: The amount spent by a user for a purchase, in dollars
df.info()
df.head()
# What is the average amount spent by buyers on Black Friday?
# What is the distribution of purchases by gender?
# Does age affect the amount spent on Black Friday?
# Which city category has the highest sales on Black Friday?
# What are the most popular product categories on Black Friday?
# Is there a difference in spending patterns between married and single buyers?
# Are certain occupations more likely to spend more on Black Friday?
# What is the correlation between the number of years a buyer has lived in their city and the amount spent on Black Friday?
# Is there a relationship between the product category and the amount spent?
# Are there any outliers or anomalies in the data that need to be investigated?
# What is the distribution of purchases by gender?
df["Gender"].value_counts().plot(kind="bar")
df.groupby("Gender")["Purchase"].mean().plot(kind="bar")
# Does age affect the amount spent on Black Friday?
df["Age"].unique()
grouping = df.groupby("Age")
grouping["Purchase"].mean().plot(kind="bar")
# What are the most popular product categories on Black Friday?
df["Product_ID"].nunique()
df.groupby("Product_ID")["Purchase"].mean().sort_values(ascending=False).head(5).plot(
kind="bar"
)
# Is there a difference in spending patterns between married and single buyers?
df["Marital_Status"].value_counts()
df.groupby("Marital_Status")["Purchase"].mean().plot(kind="bar")
# Are certain occupations more likely to spend more on Black Friday?
df["Occupation"].unique()
df.groupby("Occupation")["Purchase"].mean().plot(kind="bar")
|
[{"black-friday/train.csv": {"column_names": "[\"User_ID\", \"Product_ID\", \"Gender\", \"Age\", \"Occupation\", \"City_Category\", \"Stay_In_Current_City_Years\", \"Marital_Status\", \"Product_Category_1\", \"Product_Category_2\", \"Product_Category_3\", \"Purchase\"]", "column_data_types": "{\"User_ID\": \"int64\", \"Product_ID\": \"object\", \"Gender\": \"object\", \"Age\": \"object\", \"Occupation\": \"int64\", \"City_Category\": \"object\", \"Stay_In_Current_City_Years\": \"object\", \"Marital_Status\": \"int64\", \"Product_Category_1\": \"int64\", \"Product_Category_2\": \"float64\", \"Product_Category_3\": \"float64\", \"Purchase\": \"int64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 550068 entries, 0 to 550067\nData columns (total 12 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 User_ID 550068 non-null int64 \n 1 Product_ID 550068 non-null object \n 2 Gender 550068 non-null object \n 3 Age 550068 non-null object \n 4 Occupation 550068 non-null int64 \n 5 City_Category 550068 non-null object \n 6 Stay_In_Current_City_Years 550068 non-null object \n 7 Marital_Status 550068 non-null int64 \n 8 Product_Category_1 550068 non-null int64 \n 9 Product_Category_2 376430 non-null float64\n 10 Product_Category_3 166821 non-null float64\n 11 Purchase 550068 non-null int64 \ndtypes: float64(2), int64(5), object(5)\nmemory usage: 50.4+ MB\n", "summary": "{\"User_ID\": {\"count\": 550068.0, \"mean\": 1003028.8424013031, \"std\": 1727.5915855305516, \"min\": 1000001.0, \"25%\": 1001516.0, \"50%\": 1003077.0, \"75%\": 1004478.0, \"max\": 1006040.0}, \"Occupation\": {\"count\": 550068.0, \"mean\": 8.076706879876669, \"std\": 6.522660487341824, \"min\": 0.0, \"25%\": 2.0, \"50%\": 7.0, \"75%\": 14.0, \"max\": 20.0}, \"Marital_Status\": {\"count\": 550068.0, \"mean\": 0.40965298835780306, \"std\": 0.49177012631733, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 1.0, \"max\": 1.0}, \"Product_Category_1\": {\"count\": 550068.0, \"mean\": 5.404270017525106, \"std\": 3.936211369201389, \"min\": 1.0, \"25%\": 1.0, \"50%\": 5.0, \"75%\": 8.0, \"max\": 20.0}, \"Product_Category_2\": {\"count\": 376430.0, \"mean\": 9.842329251122386, \"std\": 5.086589648693479, \"min\": 2.0, \"25%\": 5.0, \"50%\": 9.0, \"75%\": 15.0, \"max\": 18.0}, \"Product_Category_3\": {\"count\": 166821.0, \"mean\": 12.668243206790512, \"std\": 4.125337631575282, \"min\": 3.0, \"25%\": 9.0, \"50%\": 14.0, \"75%\": 16.0, \"max\": 18.0}, \"Purchase\": {\"count\": 550068.0, \"mean\": 9263.968712959126, \"std\": 5023.065393820582, \"min\": 12.0, \"25%\": 5823.0, \"50%\": 8047.0, \"75%\": 12054.0, \"max\": 23961.0}}", "examples": "{\"User_ID\":{\"0\":1000001,\"1\":1000001,\"2\":1000001,\"3\":1000001},\"Product_ID\":{\"0\":\"P00069042\",\"1\":\"P00248942\",\"2\":\"P00087842\",\"3\":\"P00085442\"},\"Gender\":{\"0\":\"F\",\"1\":\"F\",\"2\":\"F\",\"3\":\"F\"},\"Age\":{\"0\":\"0-17\",\"1\":\"0-17\",\"2\":\"0-17\",\"3\":\"0-17\"},\"Occupation\":{\"0\":10,\"1\":10,\"2\":10,\"3\":10},\"City_Category\":{\"0\":\"A\",\"1\":\"A\",\"2\":\"A\",\"3\":\"A\"},\"Stay_In_Current_City_Years\":{\"0\":\"2\",\"1\":\"2\",\"2\":\"2\",\"3\":\"2\"},\"Marital_Status\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0},\"Product_Category_1\":{\"0\":3,\"1\":1,\"2\":12,\"3\":12},\"Product_Category_2\":{\"0\":null,\"1\":6.0,\"2\":null,\"3\":14.0},\"Product_Category_3\":{\"0\":null,\"1\":14.0,\"2\":null,\"3\":null},\"Purchase\":{\"0\":8370,\"1\":15200,\"2\":1422,\"3\":1057}}"}}, {"black-friday/test.csv": {"column_names": 
"[\"User_ID\", \"Product_ID\", \"Gender\", \"Age\", \"Occupation\", \"City_Category\", \"Stay_In_Current_City_Years\", \"Marital_Status\", \"Product_Category_1\", \"Product_Category_2\", \"Product_Category_3\"]", "column_data_types": "{\"User_ID\": \"int64\", \"Product_ID\": \"object\", \"Gender\": \"object\", \"Age\": \"object\", \"Occupation\": \"int64\", \"City_Category\": \"object\", \"Stay_In_Current_City_Years\": \"object\", \"Marital_Status\": \"int64\", \"Product_Category_1\": \"int64\", \"Product_Category_2\": \"float64\", \"Product_Category_3\": \"float64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 233599 entries, 0 to 233598\nData columns (total 11 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 User_ID 233599 non-null int64 \n 1 Product_ID 233599 non-null object \n 2 Gender 233599 non-null object \n 3 Age 233599 non-null object \n 4 Occupation 233599 non-null int64 \n 5 City_Category 233599 non-null object \n 6 Stay_In_Current_City_Years 233599 non-null object \n 7 Marital_Status 233599 non-null int64 \n 8 Product_Category_1 233599 non-null int64 \n 9 Product_Category_2 161255 non-null float64\n 10 Product_Category_3 71037 non-null float64\ndtypes: float64(2), int64(4), object(5)\nmemory usage: 19.6+ MB\n", "summary": "{\"User_ID\": {\"count\": 233599.0, \"mean\": 1003029.3568594044, \"std\": 1726.5049679955312, \"min\": 1000001.0, \"25%\": 1001527.0, \"50%\": 1003070.0, \"75%\": 1004477.0, \"max\": 1006040.0}, \"Occupation\": {\"count\": 233599.0, \"mean\": 8.085407043694536, \"std\": 6.521146481494521, \"min\": 0.0, \"25%\": 2.0, \"50%\": 7.0, \"75%\": 14.0, \"max\": 20.0}, \"Marital_Status\": {\"count\": 233599.0, \"mean\": 0.4100702485883929, \"std\": 0.49184720737729476, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 1.0, \"max\": 1.0}, \"Product_Category_1\": {\"count\": 233599.0, \"mean\": 5.276542279718663, \"std\": 3.7363801122656355, \"min\": 1.0, \"25%\": 1.0, \"50%\": 5.0, \"75%\": 8.0, \"max\": 18.0}, \"Product_Category_2\": {\"count\": 161255.0, \"mean\": 9.849586059346997, \"std\": 5.094942849775034, \"min\": 2.0, \"25%\": 5.0, \"50%\": 9.0, \"75%\": 15.0, \"max\": 18.0}, \"Product_Category_3\": {\"count\": 71037.0, \"mean\": 12.669453946534905, \"std\": 4.125944373515683, \"min\": 3.0, \"25%\": 9.0, \"50%\": 14.0, \"75%\": 16.0, \"max\": 18.0}}", "examples": "{\"User_ID\":{\"0\":1000004,\"1\":1000009,\"2\":1000010,\"3\":1000010},\"Product_ID\":{\"0\":\"P00128942\",\"1\":\"P00113442\",\"2\":\"P00288442\",\"3\":\"P00145342\"},\"Gender\":{\"0\":\"M\",\"1\":\"M\",\"2\":\"F\",\"3\":\"F\"},\"Age\":{\"0\":\"46-50\",\"1\":\"26-35\",\"2\":\"36-45\",\"3\":\"36-45\"},\"Occupation\":{\"0\":7,\"1\":17,\"2\":1,\"3\":1},\"City_Category\":{\"0\":\"B\",\"1\":\"C\",\"2\":\"B\",\"3\":\"B\"},\"Stay_In_Current_City_Years\":{\"0\":\"2\",\"1\":\"0\",\"2\":\"4+\",\"3\":\"4+\"},\"Marital_Status\":{\"0\":1,\"1\":0,\"2\":1,\"3\":1},\"Product_Category_1\":{\"0\":1,\"1\":3,\"2\":5,\"3\":4},\"Product_Category_2\":{\"0\":11.0,\"1\":5.0,\"2\":14.0,\"3\":9.0},\"Product_Category_3\":{\"0\":null,\"1\":null,\"2\":null,\"3\":null}}"}}]
| true | 2 |
<start_data_description><data_path>black-friday/train.csv:
<column_names>
['User_ID', 'Product_ID', 'Gender', 'Age', 'Occupation', 'City_Category', 'Stay_In_Current_City_Years', 'Marital_Status', 'Product_Category_1', 'Product_Category_2', 'Product_Category_3', 'Purchase']
<column_types>
{'User_ID': 'int64', 'Product_ID': 'object', 'Gender': 'object', 'Age': 'object', 'Occupation': 'int64', 'City_Category': 'object', 'Stay_In_Current_City_Years': 'object', 'Marital_Status': 'int64', 'Product_Category_1': 'int64', 'Product_Category_2': 'float64', 'Product_Category_3': 'float64', 'Purchase': 'int64'}
<dataframe_Summary>
{'User_ID': {'count': 550068.0, 'mean': 1003028.8424013031, 'std': 1727.5915855305516, 'min': 1000001.0, '25%': 1001516.0, '50%': 1003077.0, '75%': 1004478.0, 'max': 1006040.0}, 'Occupation': {'count': 550068.0, 'mean': 8.076706879876669, 'std': 6.522660487341824, 'min': 0.0, '25%': 2.0, '50%': 7.0, '75%': 14.0, 'max': 20.0}, 'Marital_Status': {'count': 550068.0, 'mean': 0.40965298835780306, 'std': 0.49177012631733, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 1.0}, 'Product_Category_1': {'count': 550068.0, 'mean': 5.404270017525106, 'std': 3.936211369201389, 'min': 1.0, '25%': 1.0, '50%': 5.0, '75%': 8.0, 'max': 20.0}, 'Product_Category_2': {'count': 376430.0, 'mean': 9.842329251122386, 'std': 5.086589648693479, 'min': 2.0, '25%': 5.0, '50%': 9.0, '75%': 15.0, 'max': 18.0}, 'Product_Category_3': {'count': 166821.0, 'mean': 12.668243206790512, 'std': 4.125337631575282, 'min': 3.0, '25%': 9.0, '50%': 14.0, '75%': 16.0, 'max': 18.0}, 'Purchase': {'count': 550068.0, 'mean': 9263.968712959126, 'std': 5023.065393820582, 'min': 12.0, '25%': 5823.0, '50%': 8047.0, '75%': 12054.0, 'max': 23961.0}}
<dataframe_info>
RangeIndex: 550068 entries, 0 to 550067
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 User_ID 550068 non-null int64
1 Product_ID 550068 non-null object
2 Gender 550068 non-null object
3 Age 550068 non-null object
4 Occupation 550068 non-null int64
5 City_Category 550068 non-null object
6 Stay_In_Current_City_Years 550068 non-null object
7 Marital_Status 550068 non-null int64
8 Product_Category_1 550068 non-null int64
9 Product_Category_2 376430 non-null float64
10 Product_Category_3 166821 non-null float64
11 Purchase 550068 non-null int64
dtypes: float64(2), int64(5), object(5)
memory usage: 50.4+ MB
<some_examples>
{'User_ID': {'0': 1000001, '1': 1000001, '2': 1000001, '3': 1000001}, 'Product_ID': {'0': 'P00069042', '1': 'P00248942', '2': 'P00087842', '3': 'P00085442'}, 'Gender': {'0': 'F', '1': 'F', '2': 'F', '3': 'F'}, 'Age': {'0': '0-17', '1': '0-17', '2': '0-17', '3': '0-17'}, 'Occupation': {'0': 10, '1': 10, '2': 10, '3': 10}, 'City_Category': {'0': 'A', '1': 'A', '2': 'A', '3': 'A'}, 'Stay_In_Current_City_Years': {'0': '2', '1': '2', '2': '2', '3': '2'}, 'Marital_Status': {'0': 0, '1': 0, '2': 0, '3': 0}, 'Product_Category_1': {'0': 3, '1': 1, '2': 12, '3': 12}, 'Product_Category_2': {'0': None, '1': 6.0, '2': None, '3': 14.0}, 'Product_Category_3': {'0': None, '1': 14.0, '2': None, '3': None}, 'Purchase': {'0': 8370, '1': 15200, '2': 1422, '3': 1057}}
<end_description>
<start_data_description><data_path>black-friday/test.csv:
<column_names>
['User_ID', 'Product_ID', 'Gender', 'Age', 'Occupation', 'City_Category', 'Stay_In_Current_City_Years', 'Marital_Status', 'Product_Category_1', 'Product_Category_2', 'Product_Category_3']
<column_types>
{'User_ID': 'int64', 'Product_ID': 'object', 'Gender': 'object', 'Age': 'object', 'Occupation': 'int64', 'City_Category': 'object', 'Stay_In_Current_City_Years': 'object', 'Marital_Status': 'int64', 'Product_Category_1': 'int64', 'Product_Category_2': 'float64', 'Product_Category_3': 'float64'}
<dataframe_Summary>
{'User_ID': {'count': 233599.0, 'mean': 1003029.3568594044, 'std': 1726.5049679955312, 'min': 1000001.0, '25%': 1001527.0, '50%': 1003070.0, '75%': 1004477.0, 'max': 1006040.0}, 'Occupation': {'count': 233599.0, 'mean': 8.085407043694536, 'std': 6.521146481494521, 'min': 0.0, '25%': 2.0, '50%': 7.0, '75%': 14.0, 'max': 20.0}, 'Marital_Status': {'count': 233599.0, 'mean': 0.4100702485883929, 'std': 0.49184720737729476, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 1.0}, 'Product_Category_1': {'count': 233599.0, 'mean': 5.276542279718663, 'std': 3.7363801122656355, 'min': 1.0, '25%': 1.0, '50%': 5.0, '75%': 8.0, 'max': 18.0}, 'Product_Category_2': {'count': 161255.0, 'mean': 9.849586059346997, 'std': 5.094942849775034, 'min': 2.0, '25%': 5.0, '50%': 9.0, '75%': 15.0, 'max': 18.0}, 'Product_Category_3': {'count': 71037.0, 'mean': 12.669453946534905, 'std': 4.125944373515683, 'min': 3.0, '25%': 9.0, '50%': 14.0, '75%': 16.0, 'max': 18.0}}
<dataframe_info>
RangeIndex: 233599 entries, 0 to 233598
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 User_ID 233599 non-null int64
1 Product_ID 233599 non-null object
2 Gender 233599 non-null object
3 Age 233599 non-null object
4 Occupation 233599 non-null int64
5 City_Category 233599 non-null object
6 Stay_In_Current_City_Years 233599 non-null object
7 Marital_Status 233599 non-null int64
8 Product_Category_1 233599 non-null int64
9 Product_Category_2 161255 non-null float64
10 Product_Category_3 71037 non-null float64
dtypes: float64(2), int64(4), object(5)
memory usage: 19.6+ MB
<some_examples>
{'User_ID': {'0': 1000004, '1': 1000009, '2': 1000010, '3': 1000010}, 'Product_ID': {'0': 'P00128942', '1': 'P00113442', '2': 'P00288442', '3': 'P00145342'}, 'Gender': {'0': 'M', '1': 'M', '2': 'F', '3': 'F'}, 'Age': {'0': '46-50', '1': '26-35', '2': '36-45', '3': '36-45'}, 'Occupation': {'0': 7, '1': 17, '2': 1, '3': 1}, 'City_Category': {'0': 'B', '1': 'C', '2': 'B', '3': 'B'}, 'Stay_In_Current_City_Years': {'0': '2', '1': '0', '2': '4+', '3': '4+'}, 'Marital_Status': {'0': 1, '1': 0, '2': 1, '3': 1}, 'Product_Category_1': {'0': 1, '1': 3, '2': 5, '3': 4}, 'Product_Category_2': {'0': 11.0, '1': 5.0, '2': 14.0, '3': 9.0}, 'Product_Category_3': {'0': None, '1': None, '2': None, '3': None}}
<end_description>
| 875 | 0 | 2,696 | 875 |
129822290
|
<jupyter_start><jupyter_text>Speech Emotion Recognition (en)
### Context
Speech is the most natural way of expressing ourselves as humans. It is only natural then to extend this communication medium to computer applications. We define speech emotion recognition (SER) systems as a collection of methodologies that process and classify speech signals to detect the embedded emotions. SER is not a new field, it has been around for over two decades, and has regained attention thanks to the recent advancements. These novel studies make use of the advances in all fields of computing and technology, making it necessary to have an update on the current methodologies and techniques that make SER possible. We have identified and discussed distinct areas of SER, provided a detailed survey of current literature of each, and also listed the current challenges.
### Content
Here 4 most popular datasets in English: Crema, Ravdess, Savee and Tess. Each of them contains audio in .wav format with some main labels.
**Ravdess:**
Here is the filename identifiers as per the official RAVDESS website:
* Modality (01 = full-AV, 02 = video-only, 03 = audio-only).
* Vocal channel (01 = speech, 02 = song).
* Emotion (01 = neutral, 02 = calm, 03 = happy, 04 = sad, 05 = angry, 06 = fearful, 07 = disgust, 08 = surprised).
* Emotional intensity (01 = normal, 02 = strong). NOTE: There is no strong intensity for the 'neutral' emotion.
* Statement (01 = "Kids are talking by the door", 02 = "Dogs are sitting by the door").
* Repetition (01 = 1st repetition, 02 = 2nd repetition).
* Actor (01 to 24. Odd numbered actors are male, even numbered actors are female).
So, here's an example of an audio filename. 02-01-06-01-02-01-12.wav This means the meta data for the audio file is:
* Video-only (02)
* Speech (01)
* Fearful (06)
* Normal intensity (01)
* Statement "dogs" (02)
* 1st Repetition (01)
* 12th Actor (12) - Female (as the actor ID number is even)
**Crema:**
The third component is responsible for the emotion label:
* SAD - sadness;
* ANG - angry;
* DIS - disgust;
* FEA - fear;
* HAP - happy;
* NEU - neutral.
**Tess:**
Very similar to Crema - label of emotion is contained in the name of file.
**Savee:**
The audio files in this dataset are named in such a way that the prefix letters describes the emotion classes as follows:
* 'a' = 'anger'
* 'd' = 'disgust'
* 'f' = 'fear'
* 'h' = 'happiness'
* 'n' = 'neutral'
* 'sa' = 'sadness'
* 'su' = 'surprise'
Kaggle dataset identifier: speech-emotion-recognition-en
<jupyter_script># # Importing and loading Dependencies
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from matplotlib import pyplot as plt
import tensorflow as tf
import tensorflow_io as tfio
import os
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Loading the data set into a tensorflow dataset
# Extracting the label or class of the example
def extract_class(file_path):
file_name = os.path.basename(file_path)
class_name = file_name.split("_")[2]
return class_name
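# For example, for a Crema filename of the form "1001_DFA_ANG_XX.wav" the third
# underscore-separated field holds the emotion label, so this would return "ANG"
# (the exact filename here is illustrative).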
# loading the data into a tensor
def load_wave(filename):
# Load encoded wav file
file_contents = tf.io.read_file(filename)
# Decode wav (tensors by channels)
wav, sample_rate = tf.audio.decode_wav(file_contents, desired_channels=1)
# Removes trailing axis
wav = tf.squeeze(wav, axis=-1)
sample_rate = tf.cast(sample_rate, dtype=tf.int64)
    # Resample from the file's original sample rate (typically 44100 Hz) down to 16000 Hz
wav = tfio.audio.resample(wav, rate_in=sample_rate, rate_out=16000)
return wav
# Creating a list of all file paths
def create_path(directory):
paths = os.listdir(directory)
final_paths = []
for path in paths:
final_paths.append(directory + "/" + path)
return final_paths
# plotting the waveform of the sound
def plot_waveform(waveform):
plt.figure()
plt.plot(waveform.numpy())
plt.xlabel("Time")
plt.ylabel("Amplitude")
plt.title("Waveform")
plt.show()
# # SAD - sadness;
# # ANG - angry;
# # DIS - disgust;
# # FEA - fear;
# # HAP - happy;
# # NEU - neutral;
# # These correspond to the emotion classes in the Crema dataset; we will need them as the labels for our audio files
file_paths = create_path("/kaggle/input/speech-emotion-recognition-en/Crema")
data_path = []
label_list = []
for x in file_paths:
data_path.append("/kaggle/input/speech-emotion-recognition-en/Crema" + "/" + x)
label_list.append(extract_class(x))
data = pd.DataFrame()
data["path"] = data_path
data["label"] = label_list
path_dataset = tf.data.Dataset.from_tensor_slices(
(np.array(data["path"]), np.array(data["label"]))
)
path_dataset.as_numpy_iterator().next()
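# The helpers defined above (`load_wave`, `plot_waveform`) are not used yet; a
# minimal usage sketch on the first file, assuming it is a readable .wav file:
example_path = data["path"].iloc[0]
example_wav = load_wave(example_path)
print("Label:", data["label"].iloc[0])
print("Samples after resampling to 16 kHz:", example_wav.shape[0])
plot_waveform(example_wav)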
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/822/129822290.ipynb
|
speech-emotion-recognition-en
|
dmitrybabko
|
[{"Id": 129822290, "ScriptId": 38402323, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10431776, "CreationDate": "05/16/2023 18:06:54", "VersionNumber": 1.0, "Title": "Speech Emotion Recognition", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 85.0, "LinesInsertedFromPrevious": 85.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186199780, "KernelVersionId": 129822290, "SourceDatasetVersionId": 1877714}]
|
[{"Id": 1877714, "DatasetId": 1118008, "DatasourceVersionId": 1915800, "CreatorUserId": 5728736, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "01/25/2021 12:59:50", "VersionNumber": 1.0, "Title": "Speech Emotion Recognition (en)", "Slug": "speech-emotion-recognition-en", "Subtitle": "Contains 4 most popular datasets: Crema, Savee, Tess, Ravee", "Description": "### Context\n\nSpeech is the most natural way of expressing ourselves as humans. It is only natural then to extend this communication medium to computer applications. We define speech emotion recognition (SER) systems as a collection of methodologies that process and classify speech signals to detect the embedded emotions. SER is not a new field, it has been around for over two decades, and has regained attention thanks to the recent advancements. These novel studies make use of the advances in all fields of computing and technology, making it necessary to have an update on the current methodologies and techniques that make SER possible. We have identified and discussed distinct areas of SER, provided a detailed survey of current literature of each, and also listed the current challenges.\n\n### Content\n\nHere 4 most popular datasets in English: Crema, Ravdess, Savee and Tess. Each of them contains audio in .wav format with some main labels.\n\n**Ravdess:**\n\nHere is the filename identifiers as per the official RAVDESS website:\n\n* Modality (01 = full-AV, 02 = video-only, 03 = audio-only).\n* Vocal channel (01 = speech, 02 = song).\n* Emotion (01 = neutral, 02 = calm, 03 = happy, 04 = sad, 05 = angry, 06 = fearful, 07 = disgust, 08 = surprised).\n* Emotional intensity (01 = normal, 02 = strong). NOTE: There is no strong intensity for the 'neutral' emotion.\n* Statement (01 = \"Kids are talking by the door\", 02 = \"Dogs are sitting by the door\").\n* Repetition (01 = 1st repetition, 02 = 2nd repetition).\n* Actor (01 to 24. Odd numbered actors are male, even numbered actors are female).\n\nSo, here's an example of an audio filename. 02-01-06-01-02-01-12.wav This means the meta data for the audio file is:\n\n* Video-only (02)\n* Speech (01)\n* Fearful (06)\n* Normal intensity (01)\n* Statement \"dogs\" (02)\n* 1st Repetition (01)\n* 12th Actor (12) - Female (as the actor ID number is even)\n\n**Crema:**\n\nThe third component is responsible for the emotion label:\n* SAD - sadness;\n* ANG - angry; \n* DIS - disgust;\n* FEA - fear; \n* HAP - happy;\n* NEU - neutral.\n\n**Tess:**\n\nVery similar to Crema - label of emotion is contained in the name of file.\n\n**Savee:**\n\nThe audio files in this dataset are named in such a way that the prefix letters describes the emotion classes as follows:\n\n* 'a' = 'anger'\n* 'd' = 'disgust'\n* 'f' = 'fear'\n* 'h' = 'happiness'\n* 'n' = 'neutral'\n* 'sa' = 'sadness'\n* 'su' = 'surprise'\n\n### Acknowledgements\n\nMy pleasure to show you a [notebook](https://www.kaggle.com/shivamburnwal/speech-emotion-recognition/comments) of this guy which inspire me to contain this dataset publicly.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1118008, "CreatorUserId": 5728736, "OwnerUserId": 5728736.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1877714.0, "CurrentDatasourceVersionId": 1915800.0, "ForumId": 1135358, "Type": 2, "CreationDate": "01/25/2021 12:59:50", "LastActivityDate": "01/25/2021", "TotalViews": 41374, "TotalDownloads": 6801, "TotalVotes": 79, "TotalKernels": 43}]
|
[{"Id": 5728736, "UserName": "dmitrybabko", "DisplayName": "Dmitry Babko", "RegisterDate": "09/06/2020", "PerformanceTier": 1}]
|
# # Importing and loading Dependencies
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from matplotlib import pyplot as plt
import tensorflow as tf
import tensorflow_io as tfio
import os
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Loading the data set into a tensorflow dataset
# Extracting the label or class of the example
def extract_class(file_path):
file_name = os.path.basename(file_path)
class_name = file_name.split("_")[2]
return class_name
# loading the data into a tensor
def load_wave(filename):
# Load encoded wav file
file_contents = tf.io.read_file(filename)
# Decode wav (tensors by channels)
wav, sample_rate = tf.audio.decode_wav(file_contents, desired_channels=1)
# Removes trailing axis
wav = tf.squeeze(wav, axis=-1)
sample_rate = tf.cast(sample_rate, dtype=tf.int64)
    # Resample from the file's original sample rate (typically 44100 Hz) down to 16000 Hz
wav = tfio.audio.resample(wav, rate_in=sample_rate, rate_out=16000)
return wav
# Creating a list of all file paths
def create_path(directory):
paths = os.listdir(directory)
final_paths = []
for path in paths:
final_paths.append(directory + "/" + path)
return final_paths
# plotting the waveform of the sound
def plot_waveform(waveform):
plt.figure()
plt.plot(waveform.numpy())
plt.xlabel("Time")
plt.ylabel("Amplitude")
plt.title("Waveform")
plt.show()
# # SAD - sadness;
# # ANG - angry;
# # DIS - disgust;
# # FEA - fear;
# # HAP - happy;
# # NEU - neutral;
# # These correspond to the emotion classes in the Crema dataset; we will need them as the labels for our audio files
file_paths = create_path("/kaggle/input/speech-emotion-recognition-en/Crema")
data_path = []
label_list = []
for x in file_paths:
data_path.append("/kaggle/input/speech-emotion-recognition-en/Crema" + "/" + x)
label_list.append(extract_class(x))
data = pd.DataFrame()
data["path"] = data_path
data["label"] = label_list
path_dataset = tf.data.Dataset.from_tensor_slices(
(np.array(data["path"]), np.array(data["label"]))
)
path_dataset.as_numpy_iterator().next()
| false | 0 | 761 | 0 | 1,545 | 761 |
||
129478589
|
<jupyter_start><jupyter_text>IPL 2022-Dataset
# Complete match-wise data of IPL 2022
# Make an EDA with this dataset
### Content
This dataset contains match-wise data of the IPL 2022 matches (March 26 - May 29, 2022), including the complete data of the group-stage matches.
### Attribute Information
1. Match Number
2. Date of the match
3. Name of Venue
4. Playing team 1
5. Playing team 2
6. Stage of the tournament
7. Toss winning team
8. Decision of toss winning team
9. First innings score
10. First innings wickets
11. Second innings score
12. Second innings wickets
13. Winning Team
14. Specify whether won by runs or wickets
15. Winning Margin
16. Player of the match (Best Performance)
17. Top scoring Batter in the match
18. Highscore in the match (Highest Individual score from both teams)
19. Bowler who has best bowling figure in the match
(if two or more bowlers has the same bowling figure ,bowler who takes more wickets from less number of overs is selected)
20. Best bowling Figure in the match
(if two bowlers has the same bowling figure ,bowler who takes more wickets from less number of overs is selected)
******Acknowledgement******
Data Source credit : https://www.espncricinfo.com/
## Please appreciate the effort with an upvote 👍 ,if you found this useful..
*Stay Tuned for Updated Versions of this Dataset*
****THANK YOU****
Kaggle dataset identifier: ipl-2022dataset
<jupyter_script>import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
data = pd.read_csv("/kaggle/input/ipl-2022dataset/Book_ipl22_ver_33.csv")
print(data.head())
figure = px.bar(data, x=data["match_winner"], title="Number of Matches Won in IPL 2022")
figure.show()
data["won_by"] = data["won_by"].map({"Wickets": "Chasing", "Runs": "Defending"})
won_by = data["won_by"].value_counts()
label = won_by.index
counts = won_by.values
colors = ["gold", "lightgreen"]
fig = go.Figure(data=[go.Pie(labels=label, values=counts)])
fig.update_layout(title_text="Number of Matches Won By Defending Or Chasing")
fig.update_traces(
hoverinfo="label+percent",
textinfo="value",
textfont_size=30,
marker=dict(colors=colors, line=dict(color="black", width=3)),
)
fig.show()
toss = data["toss_decision"].value_counts()
label = toss.index
counts = toss.values
colors = ["skyblue", "yellow"]
fig = go.Figure(data=[go.Pie(labels=label, values=counts)])
fig.update_layout(title_text="Toss Decision")
fig.update_traces(
hoverinfo="label+percent",
textinfo="value",
textfont_size=30,
marker=dict(colors=colors, line=dict(color="black", width=3)),
)
fig.show()
figure = px.bar(data, x=data["top_scorer"], title="Top Scorers in IPL 2022")
figure.show()
figure = px.bar(
data,
x=data["top_scorer"],
y=data["highscore"],
color=data["highscore"],
title="Top Scorers in IPL 2022",
)
figure.show()
figure = px.bar(
data, x=data["player_of_the_match"], title="Most Player of the Match Awards"
)
figure.show()
figure = px.bar(data, x=data["best_bowling"], title="Best Bowlers in IPL 2022")
figure.show()
figure = go.Figure()
figure.add_trace(
go.Bar(
x=data["venue"],
y=data["first_ings_wkts"],
name="First Innings Wickets",
marker_color="gold",
)
)
figure.add_trace(
go.Bar(
x=data["venue"],
y=data["second_ings_wkts"],
name="Second Innings Wickets",
marker_color="lightgreen",
)
)
figure.update_layout(barmode="group", xaxis_tickangle=-45)
figure.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/478/129478589.ipynb
|
ipl-2022dataset
|
aravindas01
|
[{"Id": 129478589, "ScriptId": 38500206, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6012166, "CreationDate": "05/14/2023 06:54:39", "VersionNumber": 1.0, "Title": "IPL 2022 Analysis", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 78.0, "LinesInsertedFromPrevious": 78.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185573934, "KernelVersionId": 129478589, "SourceDatasetVersionId": 3720746}]
|
[{"Id": 3720746, "DatasetId": 2065244, "DatasourceVersionId": 3775080, "CreatorUserId": 9042531, "LicenseName": "Database: Open Database, Contents: \u00a9 Original Authors", "CreationDate": "05/30/2022 17:09:22", "VersionNumber": 33.0, "Title": "IPL 2022-Dataset", "Slug": "ipl-2022dataset", "Subtitle": "Matchwise data of Tata IPL 2022 ( IPL Season 15)", "Description": "#Complete Match-wise data of IPL 2022\n\n#MAKE AN EDA with this dataset\n\n\n*******Content*****\nThis dataset contains Matchwise data of IPL matches 2022 (March26 - May29 2022) ,The complete data of group stage matches.\n\n****Attribute Information****\n1. Match Number\n2. Date of the match\n3. Name of Venue\n4. Playing team 1\n5. Playing team 2\n6. Stage of the tournament\n7. Toss winning team\n8. Decision of toss winning team\n9. First innings score\n10. First innings wickets\n11. Second innings score\n12. Second innings wickets\n13. Winning Team\n14. Specify whether won by runs or wickets\n15. Winning Margin\n16. Player of the match (Best Performance)\n17. Top scoring Batter in the match\n18. Highscore in the match (Highest Individual score from both teams)\n19. Bowler who has best bowling figure in the match\n(if two or more bowlers has the same bowling figure ,bowler who takes more wickets from less number of overs is selected)\n20. Best bowling Figure in the match\n(if two bowlers has the same bowling figure ,bowler who takes more wickets from less number of overs is selected)\n\n******Acknowledgement******\nData Source credit : https://www.espncricinfo.com/\n\n## Please appreciate the effort with an upvote \ud83d\udc4d ,if you found this useful..\n\n*Stay Tuned for Updated Versions of this Dataset*\n\n****THANK YOU****", "VersionNotes": "Data Update 2022/05/30", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2065244, "CreatorUserId": 9042531, "OwnerUserId": 9042531.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3720746.0, "CurrentDatasourceVersionId": 3775080.0, "ForumId": 2090391, "Type": 2, "CreationDate": "04/08/2022 10:26:43", "LastActivityDate": "04/08/2022", "TotalViews": 10772, "TotalDownloads": 2115, "TotalVotes": 66, "TotalKernels": 5}]
|
[{"Id": 9042531, "UserName": "aravindas01", "DisplayName": "Aravind A S", "RegisterDate": "11/30/2021", "PerformanceTier": 3}]
|
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
data = pd.read_csv("/kaggle/input/ipl-2022dataset/Book_ipl22_ver_33.csv")
print(data.head())
figure = px.bar(data, x=data["match_winner"], title="Number of Matches Won in IPL 2022")
figure.show()
data["won_by"] = data["won_by"].map({"Wickets": "Chasing", "Runs": "Defending"})
won_by = data["won_by"].value_counts()
label = won_by.index
counts = won_by.values
colors = ["gold", "lightgreen"]
fig = go.Figure(data=[go.Pie(labels=label, values=counts)])
fig.update_layout(title_text="Number of Matches Won By Defending Or Chasing")
fig.update_traces(
hoverinfo="label+percent",
textinfo="value",
textfont_size=30,
marker=dict(colors=colors, line=dict(color="black", width=3)),
)
fig.show()
toss = data["toss_decision"].value_counts()
label = toss.index
counts = toss.values
colors = ["skyblue", "yellow"]
fig = go.Figure(data=[go.Pie(labels=label, values=counts)])
fig.update_layout(title_text="Toss Decision")
fig.update_traces(
hoverinfo="label+percent",
textinfo="value",
textfont_size=30,
marker=dict(colors=colors, line=dict(color="black", width=3)),
)
fig.show()
figure = px.bar(data, x=data["top_scorer"], title="Top Scorers in IPL 2022")
figure.show()
figure = px.bar(
data,
x=data["top_scorer"],
y=data["highscore"],
color=data["highscore"],
title="Top Scorers in IPL 2022",
)
figure.show()
figure = px.bar(
data, x=data["player_of_the_match"], title="Most Player of the Match Awards"
)
figure.show()
figure = px.bar(data, x=data["best_bowling"], title="Best Bowlers in IPL 2022")
figure.show()
figure = go.Figure()
figure.add_trace(
go.Bar(
x=data["venue"],
y=data["first_ings_wkts"],
name="First Innings Wickets",
marker_color="gold",
)
)
figure.add_trace(
go.Bar(
x=data["venue"],
y=data["second_ings_wkts"],
name="Second Innings Wickets",
marker_color="lightgreen",
)
)
figure.update_layout(barmode="group", xaxis_tickangle=-45)
figure.show()
| false | 1 | 765 | 0 | 1,194 | 765 |
||
129478099
|
<jupyter_start><jupyter_text>Chatbot Dataset Topical Chat
This is a Topical Chat dataset from Amazon! It consists of over 8000 conversations and over 184000 messages!
Within each message, there is: A conversation id, which is basically which conversation the message takes place in. Each message is either the start of a conversation or a reply from the previous message. There is also a sentiment, which represents the emotion that the person who sent the message is feeling. There are 8 sentiments: Angry, Curious to Dive Deeper, Disguised, Fearful, Happy, Sad, and Surprised.
This dataset can be used in machine learning to simulate a conversation or to make a chatbot. It can also be used for data visualization, for example you could visualize the word usage for the different emotions.
PS: If you cannot download the dataset, download it from here:
https://docs.google.com/spreadsheets/d/1dFdlvgmyXfN3SriVn5Byv_BNtyroICxdgrQKBzuMA1U/edit?usp=sharing
Original github dataset:
https://github.com/alexa/Topical-Chat
Kaggle dataset identifier: chatbot-dataset-topical-chat
<jupyter_script># ## About
# Recent times has seen the invent of various Large Language models (LLMs) like ChatGPT, BARD, LLAMA.
# **Task**
# You are to develop a chat model, where the LLM can be trained with a conversation chat history of 2 users in any format you like. Ideally, the chat conversation should be over 5000 messages long, however, you may export the chat using your personal whatsapp and please train the chat model accordingly. Once the LLM is trained, one person should be able to chat with the LLM chat (representing person 2) with LLM reasonably detecting the style and tone of the conversation and replying back as person 2 (without the involvement of person
# 2).
# ### Data
# #### Importing Libraries and Installing Dependencies
# #### Downloading Topical Chat Data
# https://www.kaggle.com/datasets/arnavsharmaas/chatbot-dataset-topical-chat
import pandas as pd
import nltk
nltk.download("punkt")
filename = pd.read_csv("/kaggle/input/chatbot-dataset-topical-chat/topical_chat.csv")
df = filename.copy()
df.shape
df.head()
# #### Preprocessing Data
# Remove the first two columns, as they contain irrelevant information
df.drop("sentiment", inplace=True, axis=1)
# Merge all the text messages for each conversation into a single string
df1 = (
df.groupby("conversation_id")["message"].apply(lambda x: " ".join(x)).reset_index()
)
df1
# Tokenize the text
df1["tokenized_text"] = df1["message"].apply(nltk.word_tokenize)
df1
# Remove any non-alphabetic characters
df1["tokenized_text"] = df1["tokenized_text"].apply(
lambda x: [word.lower() for word in x if word.isalpha()]
)
df1
# Remove stop words
nltk.download("stopwords")
stopwords = nltk.corpus.stopwords.words("english")
df1["tokenized_text"] = df1["tokenized_text"].apply(
lambda x: [word for word in x if word not in stopwords]
)
df1
# Join the tokens back into a single string
df1["processed_text"] = df1["tokenized_text"].apply(lambda x: " ".join(x))
df1
# Save the preprocessed data to a new CSV file
df1.to_csv("processed_corpus.csv", index=False)
# #### Training the LLM
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
# Load the tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# Load the model
model = GPT2LMHeadModel.from_pretrained("gpt2")
# Load the processed dataset
with open("/kaggle/working/processed_corpus.csv", "r", encoding="utf-8") as f:
corpus = f.read()
# Tokenize the dataset
inputs = tokenizer(corpus, return_tensors="pt")
inputs
# Train the model
outputs = model(inputs["input_ids"], labels=inputs["input_ids"])
outputs
# Save the trained model
torch.save(model.state_dict(), "trained_model.pth")
# #### Generating Responses
# Load the trained model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.load_state_dict(torch.load("trained_model.pth"))
# Set the model to eval mode
model.eval()
# Load the tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# Define a function to generate responses
def generate_response(prompt, model, tokenizer):
# Encode the prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")
# Generate the response
output = model.generate(
input_ids=input_ids, max_length=1000, do_sample=True, top_p=0.92, top_k=50
)
# Decode the response
response = tokenizer.decode(output[0], skip_special_tokens=True)
return response
# Example usage
prompt = "Do you like dance?"
response = generate_response(prompt, model, tokenizer)
print(response)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/478/129478099.ipynb
|
chatbot-dataset-topical-chat
|
arnavsharmaas
|
[{"Id": 129478099, "ScriptId": 38497778, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4672638, "CreationDate": "05/14/2023 06:49:01", "VersionNumber": 1.0, "Title": "ATG_Internship_AI Shortlisting Task - 1", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 117.0, "LinesInsertedFromPrevious": 117.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185572826, "KernelVersionId": 129478099, "SourceDatasetVersionId": 1765643}]
|
[{"Id": 1765643, "DatasetId": 1049526, "DatasourceVersionId": 1802912, "CreatorUserId": 5939391, "LicenseName": "Unknown", "CreationDate": "12/20/2020 21:46:07", "VersionNumber": 1.0, "Title": "Chatbot Dataset Topical Chat", "Slug": "chatbot-dataset-topical-chat", "Subtitle": "Over 8000 conversations", "Description": "This is a Topical Chat dataset from Amazon! It consists of over 8000 conversations and over 184000 messages! \n\nWithin each message, there is: A conversation id, which is basically which conversation the message takes place in. Each message is either the start of a conversation or a reply from the previous message. There is also a sentiment, which represents the emotion that the person who sent the message is feeling. There are 8 sentiments: Angry, Curious to Dive Deeper, Disguised, Fearful, Happy, Sad, and Surprised.\n\nThis dataset can be used in machine learning to simulate a conversation or to make a chatbot. It can also be used for data visualization, for example you could visualize the word usage for the different emotions. \n\nPS: If you cannot download the dataset, download it from here:\nhttps://docs.google.com/spreadsheets/d/1dFdlvgmyXfN3SriVn5Byv_BNtyroICxdgrQKBzuMA1U/edit?usp=sharing\n\nOriginal github dataset:\nhttps://github.com/alexa/Topical-Chat", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1049526, "CreatorUserId": 5939391, "OwnerUserId": 5939391.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1765643.0, "CurrentDatasourceVersionId": 1802912.0, "ForumId": 1066530, "Type": 2, "CreationDate": "12/20/2020 21:46:07", "LastActivityDate": "12/20/2020", "TotalViews": 25728, "TotalDownloads": 2691, "TotalVotes": 36, "TotalKernels": 6}]
|
[{"Id": 5939391, "UserName": "arnavsharmaas", "DisplayName": "Arnav Sharma AS", "RegisterDate": "10/12/2020", "PerformanceTier": 0}]
|
# ## About
# Recent times has seen the invent of various Large Language models (LLMs) like ChatGPT, BARD, LLAMA.
# **Task**
# You are to develop a chat model, where the LLM can be trained with a conversation chat history of 2 users in any format you like. Ideally, the chat conversation should be over 5000 messages long, however, you may export the chat using your personal whatsapp and please train the chat model accordingly. Once the LLM is trained, one person should be able to chat with the LLM chat (representing person 2) with LLM reasonably detecting the style and tone of the conversation and replying back as person 2 (without the involvement of person
# 2).
# ### Data
# #### Importing Libraries and Installing Dependencies
# #### Downloading Topical Chat Data
# https://www.kaggle.com/datasets/arnavsharmaas/chatbot-dataset-topical-chat
import pandas as pd
import nltk
nltk.download("punkt")
filename = pd.read_csv("/kaggle/input/chatbot-dataset-topical-chat/topical_chat.csv")
df = filename.copy()
df.shape
df.head()
# #### Preprocessing Data
# Remove the first two columns, as they contain irrelevant information
df.drop("sentiment", inplace=True, axis=1)
# Merge all the text messages for each conversation into a single string
df1 = (
df.groupby("conversation_id")["message"].apply(lambda x: " ".join(x)).reset_index()
)
df1
# Tokenize the text
df1["tokenized_text"] = df1["message"].apply(nltk.word_tokenize)
df1
# Remove any non-alphabetic characters
df1["tokenized_text"] = df1["tokenized_text"].apply(
lambda x: [word.lower() for word in x if word.isalpha()]
)
df1
# Remove stop words
nltk.download("stopwords")
stopwords = nltk.corpus.stopwords.words("english")
df1["tokenized_text"] = df1["tokenized_text"].apply(
lambda x: [word for word in x if word not in stopwords]
)
df1
# Join the tokens back into a single string
df1["processed_text"] = df1["tokenized_text"].apply(lambda x: " ".join(x))
df1
# Save the preprocessed data to a new CSV file
df1.to_csv("processed_corpus.csv", index=False)
# #### Training the LLM
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
# Load the tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# Load the model
model = GPT2LMHeadModel.from_pretrained("gpt2")
# Load the processed dataset
with open("/kaggle/working/processed_corpus.csv", "r", encoding="utf-8") as f:
corpus = f.read()
# Tokenize the dataset
inputs = tokenizer(corpus, return_tensors="pt")
inputs
# Train the model
outputs = model(inputs["input_ids"], labels=inputs["input_ids"])
outputs
# Save the trained model
torch.save(model.state_dict(), "trained_model.pth")
# #### Generating Responses
# Load the trained model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.load_state_dict(torch.load("trained_model.pth"))
# Set the model to eval mode
model.eval()
# Load the tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# Define a function to generate responses
def generate_response(prompt, model, tokenizer):
# Encode the prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")
# Generate the response
output = model.generate(
input_ids=input_ids, max_length=1000, do_sample=True, top_p=0.92, top_k=50
)
# Decode the response
response = tokenizer.decode(output[0], skip_special_tokens=True)
return response
# Example usage
prompt = "Do you like dance?"
response = generate_response(prompt, model, tokenizer)
print(response)
| false | 1 | 1,021 | 0 | 1,307 | 1,021 |
||
129478580
|
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = sns.load_dataset("titanic")
df.head()
df.shape
# Lets know the entire columns in the data set
df.columns
# This methods help in dividing the data into IQR
df.describe()
# This will help in knowing the data type of all columns(),null_values()
df.info()
# Now lets check the null value in the data
df.isnull().sum()
# Now lets check the entire null values in the data
df.isnull().sum().sum()
# Now lets know the entire how much null percentage value in a "age" colum
null_age = df["age"].isnull().sum() / len(df["age"])
print(null_age) # So there is about 0.19% of null values in "Age data set"
# ## From the above details we can fill the null values with mean()
df.mean()
df1 = df.fillna(df.mean())
df1.isnull().sum()
df1[df1["embark_town"].isnull()]
# This is string object which cannot get replace only neumeric values can get replace
df1["embark_town"].unique()
df1[df1["deck"].isnull()]
# Same case here also
df1["deck"].unique()
df1.head()
# ## Let's start plotting
import matplotlib.pyplot as plt
import seaborn as sns
plt.hist(df1["age"])
plt.title("univariate analysis of age column")
plt.show()
# Now lets perform bivariate analysis
sns.distplot(df1["age"])
plt.title("Lets plot dist plot")
plt.show()
sns.distplot(df1["fare"])
plt.title("lets know the fare flow of fare")
plt.show()
df1["pclass"].value_counts().plot(kind="pie", autopct="%.2f")
df1.head()
# Creating a scatter plot
sns.scatterplot(data=df1, x="survived", y="age", hue=df1["sex"])
# Set plot title
plt.title("Scatter Plot of 'survived' and 'age'")
# Display the plot
plt.show()
sns.heatmap(pd.crosstab(df1["pclass"], df1["survived"]))
# let's plot the pair plot
sns.pairplot(df1)
# ## Now lets look for the outliers in th data
# For looking the outlier in the data we use Box plot we can apply it on only neumerical columns
# Create box plot
sns.boxplot(data=df1, x="sex", y="age", hue="pclass")
# Set plot title
plt.title("Box Plot")
# Display the plot
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/478/129478580.ipynb
| null | null |
[{"Id": 129478580, "ScriptId": 38465125, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12421240, "CreationDate": "05/14/2023 06:54:34", "VersionNumber": 1.0, "Title": "Data Insight", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 100.0, "LinesInsertedFromPrevious": 100.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = sns.load_dataset("titanic")
df.head()
df.shape
# Lets know the entire columns in the data set
df.columns
# This methods help in dividing the data into IQR
df.describe()
# This will help in knowing the data type of all columns(),null_values()
df.info()
# Now lets check the null value in the data
df.isnull().sum()
# Now lets check the entire null values in the data
df.isnull().sum().sum()
# Now lets know the entire how much null percentage value in a "age" colum
null_age = df["age"].isnull().sum() / len(df["age"])
print(null_age) # So there is about 0.19% of null values in "Age data set"
# ## From the above details we can fill the null values with mean()
df.mean()
df1 = df.fillna(df.mean())
df1.isnull().sum()
df1[df1["embark_town"].isnull()]
# This is string object which cannot get replace only neumeric values can get replace
df1["embark_town"].unique()
df1[df1["deck"].isnull()]
# Same case here also
df1["deck"].unique()
df1.head()
# ## Let's start plotting
import matplotlib.pyplot as plt
import seaborn as sns
plt.hist(df1["age"])
plt.title("univariate analysis of age column")
plt.show()
# Now lets perform bivariate analysis
sns.distplot(df1["age"])
plt.title("Lets plot dist plot")
plt.show()
sns.distplot(df1["fare"])
plt.title("lets know the fare flow of fare")
plt.show()
df1["pclass"].value_counts().plot(kind="pie", autopct="%.2f")
df1.head()
# Creating a scatter plot
sns.scatterplot(data=df1, x="survived", y="age", hue=df1["sex"])
# Set plot title
plt.title("Scatter Plot of 'survived' and 'age'")
# Display the plot
plt.show()
sns.heatmap(pd.crosstab(df1["pclass"], df1["survived"]))
# let's plot the pair plot
sns.pairplot(df1)
# ## Now lets look for the outliers in th data
# For looking the outlier in the data we use Box plot we can apply it on only neumerical columns
# Create box plot
sns.boxplot(data=df1, x="sex", y="age", hue="pclass")
# Set plot title
plt.title("Box Plot")
# Display the plot
plt.show()
| false | 0 | 663 | 0 | 663 | 663 |
||
129925282
|
<jupyter_start><jupyter_text>Tutorial2_data
Kaggle dataset identifier: tutorial2-data
<jupyter_script># 1. 提取aoi之外的城市
import geopandas as gpd
import matplotlib.pyplot as plt
# 导入数据
cities = gpd.read_file("../input/tutorial2-data/belgian_cities.shp")
AOI = gpd.read_file("../input/tutorial2-data/area_of_interest_.shp")
# aoi之外城市提取
cities_out_AOI = gpd.overlay(cities, AOI, how="difference")
# 展示
cities_out_AOI.plot(figsize=(10, 10), cmap="winter", column="NAME_4")
# 保存文件
cities_out_AOI.to_file("./outside.shp.shp")
# 2. 提取中心点并生成缓冲区
# 提取每个行政区划的中心点
centroids = cities.centroid
centroids.plot()
# 创建缓冲区,大小为30时并不明显
buffered_centroids = centroids.buffer(30)
buffered_centroids.plot()
# buffer 大小调整为3000
buffered_centroids2 = centroids.buffer(3000)
buffered_centroids2.plot(color="red")
# 3. 保存缓冲区文件
buffered_centroids.to_file("./buffered_centroids.shp")
# 4. 添加mapbox tile 到 folium map 中
import folium
# 创建地图并指定地图中心和缩放级别
basemap = folium.Map(location=[22.352, 113.584], zoom_start=10)
# 添加tile
folium.TileLayer(
tiles="https://api.mapbox.com/styles/v1/wuwei85/clhhm84oy01de01qu5u0thpx8.html?title=view&access_token=pk.eyJ1Ijoid3V3ZWk4NSIsImEiOiJjbGhnOWYzMWUwZ3FmM3VwaW9lNXlpaGF1In0.evIQws1H8cpi2V991RwFUA&zoomwheel=true&fresh=true#15.54/22.350395/113.584952",
name="Tile",
attr="Attribution",
).add_to(basemap)
# 将图例添加到地图中
folium.LayerControl().add_to(basemap)
# 显示地图
basemap
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/925/129925282.ipynb
|
tutorial2-data
|
kyrenchen
|
[{"Id": 129925282, "ScriptId": 38646842, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15147607, "CreationDate": "05/17/2023 13:17:50", "VersionNumber": 1.0, "Title": "Exercise 2 homework", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 57.0, "LinesInsertedFromPrevious": 57.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186347750, "KernelVersionId": 129925282, "SourceDatasetVersionId": 3529467}]
|
[{"Id": 3529467, "DatasetId": 2123013, "DatasourceVersionId": 3582279, "CreatorUserId": 3948686, "LicenseName": "Unknown", "CreationDate": "04/26/2022 04:38:00", "VersionNumber": 2.0, "Title": "Tutorial2_data", "Slug": "tutorial2-data", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Data Update 2022/04/26", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2123013, "CreatorUserId": 3948686, "OwnerUserId": 3948686.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3529467.0, "CurrentDatasourceVersionId": 3582279.0, "ForumId": 2148571, "Type": 2, "CreationDate": "04/26/2022 03:37:14", "LastActivityDate": "04/26/2022", "TotalViews": 184, "TotalDownloads": 35, "TotalVotes": 0, "TotalKernels": 19}]
|
[{"Id": 3948686, "UserName": "kyrenchen", "DisplayName": "Kyren Chen", "RegisterDate": "10/30/2019", "PerformanceTier": 0}]
|
# 1. 提取aoi之外的城市
import geopandas as gpd
import matplotlib.pyplot as plt
# 导入数据
cities = gpd.read_file("../input/tutorial2-data/belgian_cities.shp")
AOI = gpd.read_file("../input/tutorial2-data/area_of_interest_.shp")
# aoi之外城市提取
cities_out_AOI = gpd.overlay(cities, AOI, how="difference")
# 展示
cities_out_AOI.plot(figsize=(10, 10), cmap="winter", column="NAME_4")
# 保存文件
cities_out_AOI.to_file("./outside.shp.shp")
# 2. 提取中心点并生成缓冲区
# 提取每个行政区划的中心点
centroids = cities.centroid
centroids.plot()
# 创建缓冲区,大小为30时并不明显
buffered_centroids = centroids.buffer(30)
buffered_centroids.plot()
# buffer 大小调整为3000
buffered_centroids2 = centroids.buffer(3000)
buffered_centroids2.plot(color="red")
# 3. 保存缓冲区文件
buffered_centroids.to_file("./buffered_centroids.shp")
# 4. 添加mapbox tile 到 folium map 中
import folium
# 创建地图并指定地图中心和缩放级别
basemap = folium.Map(location=[22.352, 113.584], zoom_start=10)
# 添加tile
folium.TileLayer(
tiles="https://api.mapbox.com/styles/v1/wuwei85/clhhm84oy01de01qu5u0thpx8.html?title=view&access_token=pk.eyJ1Ijoid3V3ZWk4NSIsImEiOiJjbGhnOWYzMWUwZ3FmM3VwaW9lNXlpaGF1In0.evIQws1H8cpi2V991RwFUA&zoomwheel=true&fresh=true#15.54/22.350395/113.584952",
name="Tile",
attr="Attribution",
).add_to(basemap)
# 将图例添加到地图中
folium.LayerControl().add_to(basemap)
# 显示地图
basemap
| false | 0 | 601 | 0 | 622 | 601 |
||
129925421
|
a = 0
b = 1
for i in range(1, 11):
print(a, end="\b")
c = b
b = b + a
a = c
# **Contorl statment change the flow of execution of loops**
# **Break statement**
i = 0
while True:
print(i)
i += 1
if i == 5:
break
# **Find the 20th odd number**
i = 1
counter = 0
while True:
if i % 2 == 1:
counter = (
counter + 1
) # whenever the encounter the odd number. we increase counter
if counter == 20:
print(i)
break
i = i + 1
# **Continue - it skips/reject all the code of the iteration of the lopp and moves the control at the top of the loop**
i = 0
while i <= 5:
i += 1
if i == 3:
continue
print(i)
a = "Data Analytics"
for i in a:
if i == "a":
continue
print(i)
# **NESTED LOOPS**
i = 1
while i <= 4:
j = 1
while j <= 3:
print(j, end=" ")
j += 1
i += 1
print("\n")
i = 1
while i <= 5:
j = 1
while j <= i:
print("*", end=" ")
j += 1
i += 1
print("\n")
# **Prime Numbers**
i = 2
while i <= 20:
j = 2
while j <= i // 2 + 1:
if i == 2:
print(i)
if i % j == 0:
break
if j == i // 2 + 1:
print(i)
j += 1
i += 1
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/925/129925421.ipynb
| null | null |
[{"Id": 129925421, "ScriptId": 38553040, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12418085, "CreationDate": "05/17/2023 13:18:49", "VersionNumber": 1.0, "Title": "Fibonacci series and control statement", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 82.0, "LinesInsertedFromPrevious": 82.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
a = 0
b = 1
for i in range(1, 11):
print(a, end="\b")
c = b
b = b + a
a = c
# **Contorl statment change the flow of execution of loops**
# **Break statement**
i = 0
while True:
print(i)
i += 1
if i == 5:
break
# **Find the 20th odd number**
i = 1
counter = 0
while True:
if i % 2 == 1:
counter = (
counter + 1
) # whenever the encounter the odd number. we increase counter
if counter == 20:
print(i)
break
i = i + 1
# **Continue - it skips/reject all the code of the iteration of the lopp and moves the control at the top of the loop**
i = 0
while i <= 5:
i += 1
if i == 3:
continue
print(i)
a = "Data Analytics"
for i in a:
if i == "a":
continue
print(i)
# **NESTED LOOPS**
i = 1
while i <= 4:
j = 1
while j <= 3:
print(j, end=" ")
j += 1
i += 1
print("\n")
i = 1
while i <= 5:
j = 1
while j <= i:
print("*", end=" ")
j += 1
i += 1
print("\n")
# **Prime Numbers**
i = 2
while i <= 20:
j = 2
while j <= i // 2 + 1:
if i == 2:
print(i)
if i % j == 0:
break
if j == i // 2 + 1:
print(i)
j += 1
i += 1
| false | 0 | 458 | 0 | 458 | 458 |
||
129925489
|
<jupyter_start><jupyter_text>Diamonds
### Context
This classic dataset contains the prices and other attributes of almost 54,000 diamonds. It's a great dataset for beginners learning to work with data analysis and visualization.
### Content
**price** price in US dollars (\$326--\$18,823)
**carat** weight of the diamond (0.2--5.01)
**cut** quality of the cut (Fair, Good, Very Good, Premium, Ideal)
**color** diamond colour, from J (worst) to D (best)
**clarity** a measurement of how clear the diamond is (I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best))
**x** length in mm (0--10.74)
**y** width in mm (0--58.9)
**z** depth in mm (0--31.8)
**depth** total depth percentage = z / mean(x, y) = 2 * z / (x + y) (43--79)
**table** width of top of diamond relative to widest point (43--95)
Kaggle dataset identifier: diamonds
<jupyter_script>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from google.colab import files
uploaded = files.upload()
df = pd.read_csv("diamonds.csv")
df.head()
df.info()
# 3.1
df.describe()
df.describe(include=["O"]) # категориальные признаки
# что есть в дата сете
# 1. Unnamed: 0 - "дубликат" индексирования, начинающий счёт с единицы.
# 2. carat - мера веса, 1 карат = 0.2 грамма
# 3. cut - качество огранки камня
# 4. color - цветовой оттенок алмаза, не являются эквивалентно ценными (тоже упорядочены по качеству)
# 5. clarity - наличие пороков камня
# 6. depth = 2 z / (x + y) 100% - глубина камня
# 7. table - плоскость сверху камня
# 8. price - таргет, цена алмаза
# 9. x - длина камня в миллиметрах
# 10. y - ширина камня в миллиметрах
# 11. z - высота камня в миллиметрах
# 53940 строк и 11 столбцов, 3 категориальных признака, остальные числовые
# price -> price in US dollars ( 326− 18823) -> цена в долларах США
# Категории
pd.Series.unique(df["cut"])
np.sort(pd.Series.unique(df["color"]))
pd.Series.unique(df["clarity"])
spe = df.groupby("cut")["Unnamed: 0"].count()
pie, ax = plt.subplots(figsize=[10, 6])
labels = spe.keys()
plt.pie(x=spe, autopct="%.1f%%", labels=labels, pctdistance=0.8)
plt.title("Cut", fontsize=14)
spe = df.groupby("color")["Unnamed: 0"].count()
pie, ax = plt.subplots(figsize=[10, 6])
labels = spe.keys()
plt.pie(x=spe, autopct="%.1f%%", labels=labels, pctdistance=0.8)
plt.title("Color", fontsize=14)
spe = df.groupby("clarity")["Unnamed: 0"].count()
pie, ax = plt.subplots(figsize=[10, 6])
labels = spe.keys()
plt.pie(x=spe, autopct="%.1f%%", labels=labels, pctdistance=0.8)
plt.title("Clarity", fontsize=14)
# Пропуски
df.isnull().sum()
# Дубликаты
df0 = df.drop(columns=["Unnamed: 0"])
df0
df1 = df0.drop(np.where(df0.duplicated() == True)[0])
df1
# Визуалищируйте некоторые признаки
df1["price"].hist(bins=7)
df1["prices_bins"] = pd.cut(df1["price"], 7, labels=[0, 1, 2, 3, 4, 5, 6])
# 5.2. Категориальные признаки
df1["prices_bins"] = pd.cut(df1["price"], 7, labels=[0, 1, 2, 3, 4, 5, 6])
sns.countplot(data=df1, x="cut", hue="prices_bins")
sns.countplot(data=df1, x="color", hue="prices_bins")
sns.countplot(data=df1, x="clarity", hue="prices_bins")
# 5.3. Количественные признаки
sns.lineplot(data=df1, x="carat", y="price")
# 5.4. Выбросы в разных категориях
sns.boxplot(data=df1, y="carat", x="prices_bins")
sns.boxplot(data=df1, y="table", x="prices_bins")
# 6. Наличие отклонений и аномалий
sns.boxplot(data=df1, y="carat")
df1[df1["carat"] > 4]
sns.boxplot(data=df1, y="depth")
df1[(df1["depth"] < 50) | (df1["depth"] > 75)]
sns.boxplot(data=df1, y="table")
df1[(df1["table"] < 50) | (df1["table"] > 80)]
sns.boxplot(data=df1, y="x")
sns.boxplot(data=df1, y="y")
sns.boxplot(data=df1, y="z")
df1[(df1["x"] <= 0) | (df1["y"] <= 0) | (df1["z"] <= 0)]
df1[(df1["x"] > 10) | (df1["y"] > 10) | (df1["z"] > 10)]
# Удаление отклонений
df2 = df1.drop(df1[df1["carat"] > 4].index)
df2.drop(df2[df2["z"] <= 0].index, inplace=True)
df2.drop(df2[(df2["depth"] < 50) | (df2["depth"] > 75)].index, inplace=True)
df2.drop(df2[(df2["table"] < 50) | (df2["table"] > 80)].index, inplace=True)
df2.drop(df2[(df2["x"] > 10) | (df2["y"] > 10) | (df2["z"] > 10)].index, inplace=True)
df2
# Сравним
df1.describe()
df2.describe()
df1.describe(include=["O"])
df2.describe(include=["O"])
# 7. Кодирование категориальных признаков
from sklearn.preprocessing import LabelEncoder
df3 = df2
df3["color"] = LabelEncoder().fit_transform(df3["color"])
df3.replace(
{"Ideal": 0, "Premium": 1, "Very Good": 2, "Good": 3, "Fair": 4}, inplace=True
)
df3.replace(
{"I1": 0, "SI2": 1, "SI1": 2, "VS2": 3, "VS1": 4, "VVS2": 5, "VVS1": 6, "IF": 7},
inplace=True,
)
df3
# 8. Зависимость в признаках
sns.lineplot(data=df3, x="carat", y="price")
# 9. Тепловая карта
sns.heatmap(df3.corr(), fmt=".2f")
sns.heatmap(df3.corr(), annot=True, fmt=".2f")
# 10. Нормализация
from sklearn.preprocessing import MinMaxScaler
df4 = pd.DataFrame(MinMaxScaler().fit_transform(df3), columns=df3.columns)
df4
# 11. Построение линейной регрессии
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
# Делим на трейн и тест
h_X = df4[["carat", "cut", "color", "clarity", "depth", "table", "x", "y", "z"]]
h_y = df4["price"]
h_X_train, h_X_test, h_y_train, h_y_test = train_test_split(h_X, h_y, test_size=0.3)
# Смотрим что пришло
h_X_train.head()
h_X_train.describe()
h_y_train.head()
h_y_train.describe()
# Смотрим что пришло в датафреймах
h_X_test.head()
h_X_test.describe()
h_y_test.head()
h_y_test.describe()
# Диаграммы рассеяния
fig = plt.figure(figsize=(20, 20))
axes = sns.jointplot(x=h_X_train["carat"], y=h_y_train, kind="reg", ci=95)
plt.show()
# Создаем и обучаем линейную регрессию
line_model = LinearRegression()
line_model.fit(h_X_train, h_y_train)
# Весовые коэффициенты
coeff_df = pd.DataFrame(line_model.coef_, h_X_train.columns, columns=["Coefficient"])
coeff_df
# Предсказание на тесте
h_y_pred = line_model.predict(h_X_test)
# Сравним с реальными значениями
p_df = pd.DataFrame({"Actual": h_y_test, "Predicted": h_y_pred})
p_df
from sklearn import metrics
print("Mean Absolute Error:", metrics.mean_absolute_error(h_y_test, h_y_pred))
print("Mean Squared Error:", metrics.mean_squared_error(h_y_test, h_y_pred))
print(
"Root Mean Squared Error:", np.sqrt(metrics.mean_squared_error(h_y_test, h_y_pred))
)
# Оценка результата с помощью R2
r2_score(h_y_test, h_y_pred)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/925/129925489.ipynb
|
diamonds
|
shivam2503
|
[{"Id": 129925489, "ScriptId": 37401994, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14091366, "CreationDate": "05/17/2023 13:19:20", "VersionNumber": 1.0, "Title": "Linear Regression", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 249.0, "LinesInsertedFromPrevious": 249.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186347976, "KernelVersionId": 129925489, "SourceDatasetVersionId": 2368}]
|
[{"Id": 2368, "DatasetId": 1312, "DatasourceVersionId": 2368, "CreatorUserId": 945829, "LicenseName": "Unknown", "CreationDate": "05/25/2017 03:06:57", "VersionNumber": 1.0, "Title": "Diamonds", "Slug": "diamonds", "Subtitle": "Analyze diamonds by their cut, color, clarity, price, and other attributes", "Description": "### Context \n\nThis classic dataset contains the prices and other attributes of almost 54,000 diamonds. It's a great dataset for beginners learning to work with data analysis and visualization.\n\n### Content\n\n**price** price in US dollars (\\$326--\\$18,823)\n\n**carat** weight of the diamond (0.2--5.01)\n\n**cut** quality of the cut (Fair, Good, Very Good, Premium, Ideal)\n\n**color** diamond colour, from J (worst) to D (best)\n\n**clarity** a measurement of how clear the diamond is (I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best))\n\n**x** length in mm (0--10.74)\n\n**y** width in mm (0--58.9)\n\n**z** depth in mm (0--31.8)\n\n**depth** total depth percentage = z / mean(x, y) = 2 * z / (x + y) (43--79)\n\n**table** width of top of diamond relative to widest point (43--95)", "VersionNotes": "Initial release", "TotalCompressedBytes": 3192560.0, "TotalUncompressedBytes": 3192560.0}]
|
[{"Id": 1312, "CreatorUserId": 945829, "OwnerUserId": 945829.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2368.0, "CurrentDatasourceVersionId": 2368.0, "ForumId": 3701, "Type": 2, "CreationDate": "05/25/2017 03:06:57", "LastActivityDate": "02/06/2018", "TotalViews": 434479, "TotalDownloads": 74575, "TotalVotes": 952, "TotalKernels": 444}]
|
[{"Id": 945829, "UserName": "shivam2503", "DisplayName": "Shivam Agrawal", "RegisterDate": "03/07/2017", "PerformanceTier": 1}]
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from google.colab import files
uploaded = files.upload()
df = pd.read_csv("diamonds.csv")
df.head()
df.info()
# 3.1
df.describe()
df.describe(include=["O"]) # категориальные признаки
# что есть в дата сете
# 1. Unnamed: 0 - "дубликат" индексирования, начинающий счёт с единицы.
# 2. carat - мера веса, 1 карат = 0.2 грамма
# 3. cut - качество огранки камня
# 4. color - цветовой оттенок алмаза, не являются эквивалентно ценными (тоже упорядочены по качеству)
# 5. clarity - наличие пороков камня
# 6. depth = 2 z / (x + y) 100% - глубина камня
# 7. table - плоскость сверху камня
# 8. price - таргет, цена алмаза
# 9. x - длина камня в миллиметрах
# 10. y - ширина камня в миллиметрах
# 11. z - высота камня в миллиметрах
# 53940 строк и 11 столбцов, 3 категориальных признака, остальные числовые
# price -> price in US dollars ( 326− 18823) -> цена в долларах США
# Категории
pd.Series.unique(df["cut"])
np.sort(pd.Series.unique(df["color"]))
pd.Series.unique(df["clarity"])
spe = df.groupby("cut")["Unnamed: 0"].count()
pie, ax = plt.subplots(figsize=[10, 6])
labels = spe.keys()
plt.pie(x=spe, autopct="%.1f%%", labels=labels, pctdistance=0.8)
plt.title("Cut", fontsize=14)
spe = df.groupby("color")["Unnamed: 0"].count()
pie, ax = plt.subplots(figsize=[10, 6])
labels = spe.keys()
plt.pie(x=spe, autopct="%.1f%%", labels=labels, pctdistance=0.8)
plt.title("Color", fontsize=14)
spe = df.groupby("clarity")["Unnamed: 0"].count()
pie, ax = plt.subplots(figsize=[10, 6])
labels = spe.keys()
plt.pie(x=spe, autopct="%.1f%%", labels=labels, pctdistance=0.8)
plt.title("Clarity", fontsize=14)
# Пропуски
df.isnull().sum()
# Дубликаты
df0 = df.drop(columns=["Unnamed: 0"])
df0
df1 = df0.drop(np.where(df0.duplicated() == True)[0])
df1
# Визуалищируйте некоторые признаки
df1["price"].hist(bins=7)
df1["prices_bins"] = pd.cut(df1["price"], 7, labels=[0, 1, 2, 3, 4, 5, 6])
# 5.2. Категориальные признаки
df1["prices_bins"] = pd.cut(df1["price"], 7, labels=[0, 1, 2, 3, 4, 5, 6])
sns.countplot(data=df1, x="cut", hue="prices_bins")
sns.countplot(data=df1, x="color", hue="prices_bins")
sns.countplot(data=df1, x="clarity", hue="prices_bins")
# 5.3. Количественные признаки
sns.lineplot(data=df1, x="carat", y="price")
# 5.4. Выбросы в разных категориях
sns.boxplot(data=df1, y="carat", x="prices_bins")
sns.boxplot(data=df1, y="table", x="prices_bins")
# 6. Наличие отклонений и аномалий
sns.boxplot(data=df1, y="carat")
df1[df1["carat"] > 4]
sns.boxplot(data=df1, y="depth")
df1[(df1["depth"] < 50) | (df1["depth"] > 75)]
sns.boxplot(data=df1, y="table")
df1[(df1["table"] < 50) | (df1["table"] > 80)]
sns.boxplot(data=df1, y="x")
sns.boxplot(data=df1, y="y")
sns.boxplot(data=df1, y="z")
df1[(df1["x"] <= 0) | (df1["y"] <= 0) | (df1["z"] <= 0)]
df1[(df1["x"] > 10) | (df1["y"] > 10) | (df1["z"] > 10)]
# Удаление отклонений
df2 = df1.drop(df1[df1["carat"] > 4].index)
df2.drop(df2[df2["z"] <= 0].index, inplace=True)
df2.drop(df2[(df2["depth"] < 50) | (df2["depth"] > 75)].index, inplace=True)
df2.drop(df2[(df2["table"] < 50) | (df2["table"] > 80)].index, inplace=True)
df2.drop(df2[(df2["x"] > 10) | (df2["y"] > 10) | (df2["z"] > 10)].index, inplace=True)
df2
# Сравним
df1.describe()
df2.describe()
df1.describe(include=["O"])
df2.describe(include=["O"])
# 7. Кодирование категориальных признаков
from sklearn.preprocessing import LabelEncoder
df3 = df2
df3["color"] = LabelEncoder().fit_transform(df3["color"])
df3.replace(
{"Ideal": 0, "Premium": 1, "Very Good": 2, "Good": 3, "Fair": 4}, inplace=True
)
df3.replace(
{"I1": 0, "SI2": 1, "SI1": 2, "VS2": 3, "VS1": 4, "VVS2": 5, "VVS1": 6, "IF": 7},
inplace=True,
)
df3
# 8. Зависимость в признаках
sns.lineplot(data=df3, x="carat", y="price")
# 9. Тепловая карта
sns.heatmap(df3.corr(), fmt=".2f")
sns.heatmap(df3.corr(), annot=True, fmt=".2f")
# 10. Нормализация
from sklearn.preprocessing import MinMaxScaler
df4 = pd.DataFrame(MinMaxScaler().fit_transform(df3), columns=df3.columns)
df4
# 11. Построение линейной регрессии
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
# Делим на трейн и тест
h_X = df4[["carat", "cut", "color", "clarity", "depth", "table", "x", "y", "z"]]
h_y = df4["price"]
h_X_train, h_X_test, h_y_train, h_y_test = train_test_split(h_X, h_y, test_size=0.3)
# Смотрим что пришло
h_X_train.head()
h_X_train.describe()
h_y_train.head()
h_y_train.describe()
# Смотрим что пришло в датафреймах
h_X_test.head()
h_X_test.describe()
h_y_test.head()
h_y_test.describe()
# Диаграммы рассеяния
fig = plt.figure(figsize=(20, 20))
axes = sns.jointplot(x=h_X_train["carat"], y=h_y_train, kind="reg", ci=95)
plt.show()
# Создаем и обучаем линейную регрессию
line_model = LinearRegression()
line_model.fit(h_X_train, h_y_train)
# Весовые коэффициенты
coeff_df = pd.DataFrame(line_model.coef_, h_X_train.columns, columns=["Coefficient"])
coeff_df
# Предсказание на тесте
h_y_pred = line_model.predict(h_X_test)
# Сравним с реальными значениями
p_df = pd.DataFrame({"Actual": h_y_test, "Predicted": h_y_pred})
p_df
from sklearn import metrics
print("Mean Absolute Error:", metrics.mean_absolute_error(h_y_test, h_y_pred))
print("Mean Squared Error:", metrics.mean_squared_error(h_y_test, h_y_pred))
print(
"Root Mean Squared Error:", np.sqrt(metrics.mean_squared_error(h_y_test, h_y_pred))
)
# Оценка результата с помощью R2
r2_score(h_y_test, h_y_pred)
| false | 0 | 2,639 | 0 | 2,948 | 2,639 |
||
129925031
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/crimedata/crimes.csv", sep=";")
df.head()
sns.set(style="whitegrid")
sns.boxplot(data=df)
df[df["Viol"] > 2300]
# # Realtion entre les variables
sns.pairplot(data=df)
# # Correlation entre les variables
df.corr()
sns.heatmap(df.corr(), annot=True)
# # Independent Variables and Labels
X = df.iloc[:, 1:8].values
y = df.iloc[:, 0].values
print(y)
# # Principal Component
# 
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_scaled = sc.fit_transform(X)
pca = PCA(n_components=None)
X_pca = pca.fit_transform(X_scaled)
print(pca.explained_variance_ratio_)
print(sum(pca.explained_variance_ratio_[:2]))
X_pca.shape
X_pca[0, :]
plt.scatter(x=X_pca[:, 0], y=X_pca[:, 1])
for label, i, j in zip(y, X_pca[:, 0], X_pca[:, 1]):
plt.annotate(label, xy=(i, j))
plt.xlabel("PCA1")
plt.ylabel("PCA2")
plt.show()
# # KMeans clustring
# - il va creer k centroides au hasard ala la premiere iteration
# - apres il vas calcluer les distances
# - il vas rechercher des centroides tout depends des points ,
# - il vas recalculer les distances et faire des mis a jours sur les points
# - il va faire cet iteration jusqua ik nya pas des changement des classes de points
# 
from sklearn.cluster import KMeans
wcss = []
for k in range(1, 11):
kmeans = KMeans(n_clusters=k, init="k-means++", random_state=0).fit(X_scaled)
wcss.append(kmeans.inertia_)
wcss
plt.figure(figsize=(12, 5))
plt.plot(range(1, 11), wcss, "bx-")
plt.title("Elbow Method")
plt.xlabel("Number of cluster")
plt.ylabel("WCSS")
plt.show()
y_kmeans = kmeans.labels_
y_kmeans
df.head()
df.describe()
kmeans = KMeans(n_clusters=4, random_state=0)
kmeans.fit(X_scaled)
y_kmeans = kmeans.labels_
y_kmeans
df.head()
kmeans_x = KMeans(n_clusters=4, random_state=0).fit(X)
y_kmeans_x = kmeans_x.labels_
y_kmeans_x
kmeans_pca = KMeans(n_clusters=4, random_state=0)
kmeans_pca.fit(X_pca)
y_kmeans_pca = kmeans_pca.labels_
y_kmeans_pca
# # Visualisation
fig, axes = plt.subplots(1, 3, figsize=(16, 8))
axes[0].set_title("KMeans with X")
axes[0].scatter(
X_pca[y_kmeans_x == 0, 0], X_pca[y_kmeans_x == 0, 1], c="red", label="Cluster 0"
)
axes[0].scatter(
X_pca[y_kmeans_x == 1, 0], X_pca[y_kmeans_x == 1, 1], c="blue", label="Cluster 1"
)
axes[0].scatter(
X_pca[y_kmeans_x == 2, 0], X_pca[y_kmeans_x == 2, 1], c="green", label="Cluster 2"
)
axes[0].scatter(
X_pca[y_kmeans_x == 3, 0], X_pca[y_kmeans_x == 3, 1], c="yellow", label="Cluster 3"
)
for label, i, j in zip(labels, X_pca[:, 0], X_pca[:, 1]):
axes[0].annotate(label, xy=(i, j))
axes[0].set_xlabel("PCA1")
axes[0].set_xlabel("PCA2")
axes[0].legend()
axes[1].set_title("KMeans with X_Sclaed")
axes[1].scatter(
X_pca[y_kmeans == 0, 0], X_pca[y_kmeans == 0, 1], c="red", label="Cluster 0"
)
axes[1].scatter(
X_pca[y_kmeans == 1, 0], X_pca[y_kmeans == 1, 1], c="blue", label="Cluster 1"
)
axes[1].scatter(
X_pca[y_kmeans == 2, 0], X_pca[y_kmeans == 2, 1], c="green", label="Cluster 2"
)
axes[1].scatter(
X_pca[y_kmeans == 3, 0], X_pca[y_kmeans == 3, 1], c="yellow", label="Cluster 3"
)
for label, i, j in zip(labels, X_pca[:, 0], X_pca[:, 1]):
axes[1].annotate(label, xy=(i, j))
axes[1].set_xlabel("PCA1")
axes[1].set_xlabel("PCA2")
axes[1].legend()
plt.show()
plt.title("KMeans with X_pca")
plt.scatter(
X_pca[y_kmeans_pca == 0, 0], X_pca[y_kmeans_pca == 0, 1], c="red", label="Cluster 0"
)
plt.scatter(
X_pca[y_kmeans_pca == 1, 0],
X_pca[y_kmeans_pca == 1, 1],
c="blue",
label="Cluster 1",
)
plt.scatter(
X_pca[y_kmeans_pca == 2, 0],
X_pca[y_kmeans_pca == 2, 1],
c="green",
label="Cluster 2",
)
plt.scatter(
X_pca[y_kmeans_pca == 3, 0],
X_pca[y_kmeans_pca == 3, 1],
c="yellow",
label="Cluster 3",
)
for label, i, j in zip(labels, X_pca[:, 0], X_pca[:, 1]):
axes[2].annotate(label, xy=(i, j))
plt.xlabel("PCA1")
plt.xlabel("PCA2")
plt.legend()
plt.show()
df["cluster"] = y_kmeans
df.head()
xc = df.drop(["cluster", "Etat "], axis=1)
yc = df["cluster"]
import graphviz
import tree
from sklearn.tree import DecisionTreeClassifier, export_graphviz
dt = DecisionTreeClassifier(max_depth=1000, random_state=0)
from sklearn.model_selection import train_test_split
xc_train, xc_test, yc_train, yc_test = train_test_split(xc, yc, test_size=0.3)
dt.fit(xc_train, yc_train)
y_predict = dt.predict(xc_test)
from six import StringIO
import pydot
dot_data = StringIO()
export_graphviz(dt, out_file=dot_data)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph.write_png("graph.png")
plt.imshow("graph.png")
g = export_graphviz(dt, feature_names=xc.columns, out_file="tree.dot", label="all")
from PIL import Image
Image(graph)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/925/129925031.ipynb
| null | null |
[{"Id": 129925031, "ScriptId": 32799000, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10803721, "CreationDate": "05/17/2023 13:16:02", "VersionNumber": 1.0, "Title": "Crimes Classification ML", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 209.0, "LinesInsertedFromPrevious": 209.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/crimedata/crimes.csv", sep=";")
df.head()
sns.set(style="whitegrid")
sns.boxplot(data=df)
df[df["Viol"] > 2300]
# # Realtion entre les variables
sns.pairplot(data=df)
# # Correlation entre les variables
df.corr()
sns.heatmap(df.corr(), annot=True)
# # Independent Variables and Labels
X = df.iloc[:, 1:8].values
y = df.iloc[:, 0].values
print(y)
# # Principal Component
# 
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_scaled = sc.fit_transform(X)
pca = PCA(n_components=None)
X_pca = pca.fit_transform(X_scaled)
print(pca.explained_variance_ratio_)
print(sum(pca.explained_variance_ratio_[:2]))
X_pca.shape
X_pca[0, :]
plt.scatter(x=X_pca[:, 0], y=X_pca[:, 1])
for label, i, j in zip(y, X_pca[:, 0], X_pca[:, 1]):
plt.annotate(label, xy=(i, j))
plt.xlabel("PCA1")
plt.ylabel("PCA2")
plt.show()
# # KMeans clustring
# - il va creer k centroides au hasard ala la premiere iteration
# - apres il vas calcluer les distances
# - il vas rechercher des centroides tout depends des points ,
# - il vas recalculer les distances et faire des mis a jours sur les points
# - il va faire cet iteration jusqua ik nya pas des changement des classes de points
# 
from sklearn.cluster import KMeans
wcss = []
for k in range(1, 11):
kmeans = KMeans(n_clusters=k, init="k-means++", random_state=0).fit(X_scaled)
wcss.append(kmeans.inertia_)
wcss
plt.figure(figsize=(12, 5))
plt.plot(range(1, 11), wcss, "bx-")
plt.title("Elbow Method")
plt.xlabel("Number of cluster")
plt.ylabel("WCSS")
plt.show()
y_kmeans = kmeans.labels_
y_kmeans
df.head()
df.describe()
kmeans = KMeans(n_clusters=4, random_state=0)
kmeans.fit(X_scaled)
y_kmeans = kmeans.labels_
y_kmeans
df.head()
kmeans_x = KMeans(n_clusters=4, random_state=0).fit(X)
y_kmeans_x = kmeans_x.labels_
y_kmeans_x
kmeans_pca = KMeans(n_clusters=4, random_state=0)
kmeans_pca.fit(X_pca)
y_kmeans_pca = kmeans_pca.labels_
y_kmeans_pca
# # Visualisation
fig, axes = plt.subplots(1, 3, figsize=(16, 8))
axes[0].set_title("KMeans with X")
axes[0].scatter(
X_pca[y_kmeans_x == 0, 0], X_pca[y_kmeans_x == 0, 1], c="red", label="Cluster 0"
)
axes[0].scatter(
X_pca[y_kmeans_x == 1, 0], X_pca[y_kmeans_x == 1, 1], c="blue", label="Cluster 1"
)
axes[0].scatter(
X_pca[y_kmeans_x == 2, 0], X_pca[y_kmeans_x == 2, 1], c="green", label="Cluster 2"
)
axes[0].scatter(
X_pca[y_kmeans_x == 3, 0], X_pca[y_kmeans_x == 3, 1], c="yellow", label="Cluster 3"
)
for label, i, j in zip(labels, X_pca[:, 0], X_pca[:, 1]):
axes[0].annotate(label, xy=(i, j))
axes[0].set_xlabel("PCA1")
axes[0].set_xlabel("PCA2")
axes[0].legend()
axes[1].set_title("KMeans with X_Sclaed")
axes[1].scatter(
X_pca[y_kmeans == 0, 0], X_pca[y_kmeans == 0, 1], c="red", label="Cluster 0"
)
axes[1].scatter(
X_pca[y_kmeans == 1, 0], X_pca[y_kmeans == 1, 1], c="blue", label="Cluster 1"
)
axes[1].scatter(
X_pca[y_kmeans == 2, 0], X_pca[y_kmeans == 2, 1], c="green", label="Cluster 2"
)
axes[1].scatter(
X_pca[y_kmeans == 3, 0], X_pca[y_kmeans == 3, 1], c="yellow", label="Cluster 3"
)
for label, i, j in zip(labels, X_pca[:, 0], X_pca[:, 1]):
axes[1].annotate(label, xy=(i, j))
axes[1].set_xlabel("PCA1")
axes[1].set_xlabel("PCA2")
axes[1].legend()
plt.show()
plt.title("KMeans with X_pca")
plt.scatter(
X_pca[y_kmeans_pca == 0, 0], X_pca[y_kmeans_pca == 0, 1], c="red", label="Cluster 0"
)
plt.scatter(
X_pca[y_kmeans_pca == 1, 0],
X_pca[y_kmeans_pca == 1, 1],
c="blue",
label="Cluster 1",
)
plt.scatter(
X_pca[y_kmeans_pca == 2, 0],
X_pca[y_kmeans_pca == 2, 1],
c="green",
label="Cluster 2",
)
plt.scatter(
X_pca[y_kmeans_pca == 3, 0],
X_pca[y_kmeans_pca == 3, 1],
c="yellow",
label="Cluster 3",
)
for label, i, j in zip(labels, X_pca[:, 0], X_pca[:, 1]):
axes[2].annotate(label, xy=(i, j))
plt.xlabel("PCA1")
plt.xlabel("PCA2")
plt.legend()
plt.show()
df["cluster"] = y_kmeans
df.head()
xc = df.drop(["cluster", "Etat "], axis=1)
yc = df["cluster"]
import graphviz
import tree
from sklearn.tree import DecisionTreeClassifier, export_graphviz
dt = DecisionTreeClassifier(max_depth=1000, random_state=0)
from sklearn.model_selection import train_test_split
xc_train, xc_test, yc_train, yc_test = train_test_split(xc, yc, test_size=0.3)
dt.fit(xc_train, yc_train)
y_predict = dt.predict(xc_test)
from six import StringIO
import pydot
dot_data = StringIO()
export_graphviz(dt, out_file=dot_data)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph.write_png("graph.png")
plt.imshow("graph.png")
g = export_graphviz(dt, feature_names=xc.columns, out_file="tree.dot", label="all")
from PIL import Image
Image(graph)
| false | 0 | 2,250 | 0 | 2,250 | 2,250 |
||
129925202
|
a = "Data Analytics"
print(a)
# **String Indexing**
a[13]
len(a)
a[-3]
# **String Silicing**
a = "Python"
print(a)
a[0], a[1], a[-1]
a[:4]
a[start:end:step]
a[0:5:2]
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/925/129925202.ipynb
| null | null |
[{"Id": 129925202, "ScriptId": 38647693, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12418085, "CreationDate": "05/17/2023 13:17:15", "VersionNumber": 1.0, "Title": "String", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 23.0, "LinesInsertedFromPrevious": 23.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
a = "Data Analytics"
print(a)
# **String Indexing**
a[13]
len(a)
a[-3]
# **String Slicing**
a = "Python"
print(a)
a[0], a[1], a[-1]
a[:4]
# General slicing syntax: a[start:end:step]
a[0:5:2]
| false | 0 | 91 | 0 | 91 | 91 |
||
129925638
|
<jupyter_start><jupyter_text>Transformer
Kaggle dataset identifier: transformer
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
ds_A = pd.read_csv("/kaggle/input/transformer/DatasetA.csv")
ds_B = pd.read_csv("/kaggle/input/transformer/DatasetB.csv")
# Splitting train and test
from sklearn.model_selection import train_test_split
train_set_A, test_set_A = train_test_split(ds_A, test_size=0.25, random_state=11)
# Setting the labels
y_train_A = train_set_A["Furan"]
y_test_A = test_set_A["Furan"]
# Dropping the Furan and Health Index columns
X_train_A = train_set_A.drop(["Furan", "HI"], axis=1)
X_test_A = test_set_A.drop(["Furan", "HI"], axis=1)
# For DatasetB
y_B = ds_B["Furan"]
X_B = ds_B.drop(["Furan", "HI"], axis=1)
X_train_A = X_train_A.drop(set(ds_A.columns) - set(ds_B.columns), axis=1)
X_test_A = X_test_A.drop(set(ds_A.columns) - set(ds_B.columns), axis=1)
X_B = X_B[X_train_A.columns]
X_train_A
# define the bin edges for each class
bins = [-1, 0.1, 1, 100]
# define the labels for each class
labels = [0, 1, 2]
y_train_A = pd.DataFrame(y_train_A)
y_B = pd.DataFrame(y_B)
y_test_A = pd.DataFrame(y_test_A)
# discretize the data into the desired classes
y_train_A["Class"] = pd.cut(y_train_A["Furan"], bins=bins, labels=labels)
y_B["Class"] = pd.cut(y_B["Furan"], bins=bins, labels=labels)
y_test_A["Class"] = pd.cut(y_test_A["Furan"], bins=bins, labels=labels)
y_train_A = np.array(y_train_A.drop("Furan", axis=1)).ravel()
y_B = np.array(y_B.drop("Furan", axis=1)).ravel()
y_test_A = np.array(y_test_A.drop("Furan", axis=1)).ravel()
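# A quick sanity-check sketch (added for illustration): count how many samples fall
# into each of the three furan bins defined above.
print("Train class counts:", dict(zip(*np.unique(y_train_A, return_counts=True))))
print("Dataset B class counts:", dict(zip(*np.unique(y_B, return_counts=True))))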
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
# from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
# log_clf = LogisticRegression(max_iter=1000)
svm_clf = SVC(probability=True, gamma=0.001)
knn_clf = KNeighborsClassifier(n_neighbors=3)
xgb_clf = XGBClassifier(
learning_rate=0.01, n_estimators=300, max_depth=3, subsample=0.7
)
mlp_clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000)
nb_clf = GaussianNB()
ada_clf = AdaBoostClassifier(n_estimators=50, learning_rate=0.003)
voting_clf = VotingClassifier(
estimators=[
("nn", mlp_clf),
("svc", svm_clf),
("knn", knn_clf), # ('ada', ada_clf),
("xgb", xgb_clf),
("nb", nb_clf),
],
voting="hard",
)
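# Note (added): with voting="hard" the ensemble takes a majority vote over predicted
# class labels; voting="soft" would average predicted probabilities instead, which the
# estimators above can provide (the SVC because probability=True is set).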
voting_clf.fit(X_train_A, np.array(y_train_A).ravel())
from sklearn.metrics import accuracy_score
for clf in (mlp_clf, svm_clf, knn_clf, xgb_clf, nb_clf, voting_clf): # ada_clf,
clf.fit(X_train_A, y_train_A)
y_pred_A = clf.predict(X_test_A)
y_pred_B = clf.predict(X_B)
print(
clf.__class__.__name__ + " for dataset A:", accuracy_score(y_test_A, y_pred_A)
)
print(clf.__class__.__name__ + " for dataset B:", accuracy_score(y_B, y_pred_B))
# # Training using all of the data from Dataset A
X_A = ds_A.drop(["Furan", "HI"], axis=1)
X_A = X_A.drop(set(ds_A.columns) - set(ds_B.columns), axis=1)
y_A = ds_A["Furan"]
y_A = np.array(y_A).ravel().astype("int64")
X_A
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
lr_clf = LogisticRegression(max_iter=10000)
svm_clf = SVC(probability=True)
knn_clf = KNeighborsClassifier()
xgb_clf = XGBClassifier(
learning_rate=0.01, n_estimators=300, max_depth=3, subsample=0.7
)
mlp_clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000)
nb_clf = GaussianNB()
# ada_clf = AdaBoostClassifier(n_estimators=50, learning_rate=0.003)
voting_clf = VotingClassifier(
estimators=[ # ('nn', mlp_clf),
("lr", lr_clf),
# ('svc', svm_clf),
("knn", knn_clf), # ('ada', ada_clf),
("xgb", xgb_clf), # ('nb', nb_clf)
],
voting="hard",
)
voting_clf.fit(X_A, y_A)
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
for clf in ( # mlp_clf, svm_clf, #ada_clf,
knn_clf,
xgb_clf,
lr_clf, # nb_clf,
voting_clf,
):
label_encoder = LabelEncoder()
y_A_encoded = label_encoder.fit_transform(y_A)
clf.fit(X_A, y_A_encoded)
y_pred_B = clf.predict(X_B)
print(clf.__class__.__name__ + " for dataset B:", accuracy_score(y_B, y_pred_B))
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(
cm, classes, normalize=False, cmap=plt.cm.Blues, title="Confusion matrix"
):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print("Confusion matrix, without normalization")
print(cm)
plt.imshow(cm, interpolation="nearest", cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = ".2f" if normalize else "d"
thresh = cm.max() / 2.0
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(
j,
i,
format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black",
)
plt.tight_layout()
plt.ylabel("True label")
plt.xlabel("Predicted label")
# Example usage
class_names = ["A", "B", "C"]
for clf in ( # mlp_clf, svm_clf, #ada_clf,
knn_clf,
xgb_clf,
lr_clf, # nb_clf,
voting_clf,
):
label_encoder = LabelEncoder()
y_A_encoded = label_encoder.fit_transform(y_A)
clf.fit(X_A, y_A_encoded)
y_pred_B = clf.predict(X_B)
cnf_matrix = confusion_matrix(y_B, y_pred_B)
np.set_printoptions(precision=2)
plt.figure()
plot_confusion_matrix(
cnf_matrix,
classes=class_names,
title="Confusion matrix for " + clf.__class__.__name__,
)
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/925/129925638.ipynb
|
transformer
|
darvack
|
[{"Id": 129925638, "ScriptId": 38514219, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5534641, "CreationDate": "05/17/2023 13:20:32", "VersionNumber": 7.0, "Title": "Transformer-Paper", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 197.0, "LinesInsertedFromPrevious": 107.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 90.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 186348266, "KernelVersionId": 129925638, "SourceDatasetVersionId": 5683446}]
|
[{"Id": 5683446, "DatasetId": 3267447, "DatasourceVersionId": 5759014, "CreatorUserId": 5534641, "LicenseName": "Unknown", "CreationDate": "05/14/2023 14:35:55", "VersionNumber": 1.0, "Title": "Transformer", "Slug": "transformer", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3267447, "CreatorUserId": 5534641, "OwnerUserId": 5534641.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5683446.0, "CurrentDatasourceVersionId": 5759014.0, "ForumId": 3333063, "Type": 2, "CreationDate": "05/14/2023 14:35:55", "LastActivityDate": "05/14/2023", "TotalViews": 45, "TotalDownloads": 0, "TotalVotes": 0, "TotalKernels": 5}]
|
[{"Id": 5534641, "UserName": "darvack", "DisplayName": "Mohammad Amin Faraji", "RegisterDate": "07/27/2020", "PerformanceTier": 1}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
ds_A = pd.read_csv("/kaggle/input/transformer/DatasetA.csv")
ds_B = pd.read_csv("/kaggle/input/transformer/DatasetB.csv")
# Splitting train and test
from sklearn.model_selection import train_test_split
train_set_A, test_set_A = train_test_split(ds_A, test_size=0.25, random_state=11)
# Setting the labels
y_train_A = train_set_A["Furan"]
y_test_A = test_set_A["Furan"]
# Dropping the Furan and Health Index columns
X_train_A = train_set_A.drop(["Furan", "HI"], axis=1)
X_test_A = test_set_A.drop(["Furan", "HI"], axis=1)
# For DatasetB
y_B = ds_B["Furan"]
X_B = ds_B.drop(["Furan", "HI"], axis=1)
X_train_A = X_train_A.drop(set(ds_A.columns) - set(ds_B.columns), axis=1)
X_test_A = X_test_A.drop(set(ds_A.columns) - set(ds_B.columns), axis=1)
X_B = X_B[X_train_A.columns]
X_train_A
# define the bin edges for each class
bins = [-1, 0.1, 1, 100]
# define the labels for each class
labels = [0, 1, 2]
y_train_A = pd.DataFrame(y_train_A)
y_B = pd.DataFrame(y_B)
y_test_A = pd.DataFrame(y_test_A)
# discretize the data into the desired classes
y_train_A["Class"] = pd.cut(y_train_A["Furan"], bins=bins, labels=labels)
y_B["Class"] = pd.cut(y_B["Furan"], bins=bins, labels=labels)
y_test_A["Class"] = pd.cut(y_test_A["Furan"], bins=bins, labels=labels)
y_train_A = np.array(y_train_A.drop("Furan", axis=1)).ravel()
y_B = np.array(y_B.drop("Furan", axis=1)).ravel()
y_test_A = np.array(y_test_A.drop("Furan", axis=1)).ravel()
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
# from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
# log_clf = LogisticRegression(max_iter=1000)
svm_clf = SVC(probability=True, gamma=0.001)
knn_clf = KNeighborsClassifier(n_neighbors=3)
xgb_clf = XGBClassifier(
learning_rate=0.01, n_estimators=300, max_depth=3, subsample=0.7
)
mlp_clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000)
nb_clf = GaussianNB()
ada_clf = AdaBoostClassifier(n_estimators=50, learning_rate=0.003)
voting_clf = VotingClassifier(
estimators=[
("nn", mlp_clf),
("svc", svm_clf),
("knn", knn_clf), # ('ada', ada_clf),
("xgb", xgb_clf),
("nb", nb_clf),
],
voting="hard",
)
voting_clf.fit(X_train_A, np.array(y_train_A).ravel())
from sklearn.metrics import accuracy_score
for clf in (mlp_clf, svm_clf, knn_clf, xgb_clf, nb_clf, voting_clf): # ada_clf,
clf.fit(X_train_A, y_train_A)
y_pred_A = clf.predict(X_test_A)
y_pred_B = clf.predict(X_B)
print(
clf.__class__.__name__ + " for dataset A:", accuracy_score(y_test_A, y_pred_A)
)
print(clf.__class__.__name__ + " for dataset B:", accuracy_score(y_B, y_pred_B))
# # Training using all of the data from Dataset A
X_A = ds_A.drop(["Furan", "HI"], axis=1)
X_A = X_A.drop(set(ds_A.columns) - set(ds_B.columns), axis=1)
y_A = ds_A["Furan"]
y_A = np.array(y_A).ravel().astype("int64")
X_A
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
lr_clf = LogisticRegression(max_iter=10000)
svm_clf = SVC(probability=True)
knn_clf = KNeighborsClassifier()
xgb_clf = XGBClassifier(
learning_rate=0.01, n_estimators=300, max_depth=3, subsample=0.7
)
mlp_clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000)
nb_clf = GaussianNB()
# ada_clf = AdaBoostClassifier(n_estimators=50, learning_rate=0.003)
voting_clf = VotingClassifier(
estimators=[ # ('nn', mlp_clf),
("lr", lr_clf),
# ('svc', svm_clf),
("knn", knn_clf), # ('ada', ada_clf),
("xgb", xgb_clf), # ('nb', nb_clf)
],
voting="hard",
)
voting_clf.fit(X_A, y_A)
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
for clf in ( # mlp_clf, svm_clf, #ada_clf,
knn_clf,
xgb_clf,
lr_clf, # nb_clf,
voting_clf,
):
label_encoder = LabelEncoder()
y_A_encoded = label_encoder.fit_transform(y_A)
clf.fit(X_A, y_A_encoded)
y_pred_B = clf.predict(X_B)
print(clf.__class__.__name__ + " for dataset B:", accuracy_score(y_B, y_pred_B))
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(
cm, classes, normalize=False, cmap=plt.cm.Blues, title="Confusion matrix"
):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print("Confusion matrix, without normalization")
print(cm)
plt.imshow(cm, interpolation="nearest", cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = ".2f" if normalize else "d"
thresh = cm.max() / 2.0
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(
j,
i,
format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black",
)
plt.tight_layout()
plt.ylabel("True label")
plt.xlabel("Predicted label")
# Example usage
class_names = ["A", "B", "C"]
for clf in ( # mlp_clf, svm_clf, #ada_clf,
knn_clf,
xgb_clf,
lr_clf, # nb_clf,
voting_clf,
):
label_encoder = LabelEncoder()
y_A_encoded = label_encoder.fit_transform(y_A)
clf.fit(X_A, y_A_encoded)
y_pred_B = clf.predict(X_B)
cnf_matrix = confusion_matrix(y_B, y_pred_B)
np.set_printoptions(precision=2)
plt.figure()
plot_confusion_matrix(
cnf_matrix,
classes=class_names,
title="Confusion matrix for " + clf.__class__.__name__,
)
plt.show()
| false | 2 | 2,432 | 1 | 2,448 | 2,432 |
||
129952320
|
<jupyter_start><jupyter_text>Anime Recommendations Database
# Context
This data set contains information on user preference data from 73,516 users on 12,294 anime. Each user is able to add anime to their completed list and give it a rating and this data set is a compilation of those ratings.
# Content
Anime.csv
- anime_id - myanimelist.net's unique id identifying an anime.
- name - full name of anime.
- genre - comma separated list of genres for this anime.
- type - movie, TV, OVA, etc.
- episodes - how many episodes in this show. (1 if movie).
- rating - average rating out of 10 for this anime.
- members - number of community members that are in this anime's
"group".
Rating.csv
- user_id - non identifiable randomly generated user id.
- anime_id - the anime that this user has rated.
- rating - rating out of 10 this user has assigned (-1 if the user watched it but didn't assign a rating).
# Acknowledgements
Thanks to myanimelist.net API for providing anime data and user ratings.
# Inspiration
Building a better anime recommendation system based only on user viewing history.
Kaggle dataset identifier: anime-recommendations-database
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df2 = pd.read_csv("/kaggle/input/anime-recommendations-database/anime.csv")
df2
# # Top 10 anime rating
Top_10_anime = df2[["name", "rating"]]
Top_10_anime = Top_10_anime.sort_values(["rating"], ascending=False)
Top_10_anime.head(10)
# # First 5 anime genres (unsorted sample)
top_5_anime_genre = df2[["name", "genre"]]
top_5_anime_genre.head(5)
# # Top 10 anime members
Top_10_anime_members = df2[["name", "members"]]
Top_10_anime_members = Top_10_anime_members.sort_values(["members"], ascending=False)
Top_10_anime_members.head(10)
# # First 5 anime episodes (unsorted sample)
Top_5_anime_with_episodes = df2[["name", "episodes"]]
Top_5_anime_with_episodes.head(5)
# # Verbalize your insights in Markdown cells.
# ## Anime Recommendations Database
# This data set contains information on user preference data from 73,516 users on 12,294 anime. Each user is able to add anime to their completed list and give it a rating and this data set is a compilation of those ratings.
# ## Content
# Anime.csv
# anime_id - myanimelist.net's unique id identifying an anime.
# name - full name of anime.
# genre - comma separated list of genres for this anime.
# type - movie, TV, OVA, etc.
# episodes - how many episodes in this show. (1 if movie).
# rating - average rating out of 10 for this anime.
# members - number of community members that are in this anime's
# "group".
# Rating.csv
# user_id - non identifiable randomly generated user id.
# anime_id - the anime that this user has rated.
# rating - rating out of 10 this user has assigned (-1 if the user watched it but didn't assign a rating).
# ## Acknowledgements
# Thanks to myanimelist.net API for providing anime data and user ratings.
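# To illustrate how the two tables described above relate (added sketch; this assumes
# the user ratings file is named rating.csv in the same input directory), the ratings
# can be joined to the anime metadata on anime_id:
ratings = pd.read_csv("/kaggle/input/anime-recommendations-database/rating.csv")
user_item = ratings.merge(df2[["anime_id", "name", "genre"]], on="anime_id", how="left")
user_item.head()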
import seaborn as sns
import matplotlib.pyplot as plt
# top_5_anime_rating
# plt.figure(figsize=(20, 10))
# sns.barplot(x="rating", y="name", data=Top_10_anime)
# # visualization of the top 5 anime members
Top_10_anime_members = Top_10_anime_members.head(5)
plt.figure()
sns.barplot(x="members", y="name", data=Top_10_anime_members, palette="viridis")
plt.title("top 5 anime members")
plt.xlabel("members")
plt.ylabel("name")
plt.show()
# # visualization of the relation between episodes and rating
plt.figure(figsize=(20, 10))
sns.lineplot(x="episodes", y="rating", data=df2.head(20), color="red")
plt.title("the relation between episodes and rating")
plt.xlabel("episodes")
plt.ylabel("rating")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/952/129952320.ipynb
|
anime-recommendations-database
| null |
[{"Id": 129952320, "ScriptId": 38634812, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14982285, "CreationDate": "05/17/2023 16:50:11", "VersionNumber": 1.0, "Title": "Anime", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 96.0, "LinesInsertedFromPrevious": 96.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186385122, "KernelVersionId": 129952320, "SourceDatasetVersionId": 1094}]
|
[{"Id": 1094, "DatasetId": 571, "DatasourceVersionId": 1094, "CreatorUserId": 415376, "LicenseName": "CC0: Public Domain", "CreationDate": "12/21/2016 04:58:34", "VersionNumber": 1.0, "Title": "Anime Recommendations Database", "Slug": "anime-recommendations-database", "Subtitle": "Recommendation data from 76,000 users at myanimelist.net", "Description": "# Context \n\nThis data set contains information on user preference data from 73,516 users on 12,294 anime. Each user is able to add anime to their completed list and give it a rating and this data set is a compilation of those ratings.\n\n# Content\n\nAnime.csv\n\n - anime_id - myanimelist.net's unique id identifying an anime.\n - name - full name of anime.\n - genre - comma separated list of genres for this anime.\n - type - movie, TV, OVA, etc.\n - episodes - how many episodes in this show. (1 if movie).\n - rating - average rating out of 10 for this anime.\n - members - number of community members that are in this anime's\n \"group\".\n\nRating.csv\n\n - user_id - non identifiable randomly generated user id.\n - anime_id - the anime that this user has rated.\n - rating - rating out of 10 this user has assigned (-1 if the user watched it but didn't assign a rating).\n\n# Acknowledgements\n\nThanks to myanimelist.net API for providing anime data and user ratings.\n\n# Inspiration\n\nBuilding a better anime recommendation system based only on user viewing history.", "VersionNotes": "Initial release", "TotalCompressedBytes": 112341362.0, "TotalUncompressedBytes": 112341362.0}]
|
[{"Id": 571, "CreatorUserId": 415376, "OwnerUserId": NaN, "OwnerOrganizationId": 137.0, "CurrentDatasetVersionId": 1094.0, "CurrentDatasourceVersionId": 1094.0, "ForumId": 2245, "Type": 2, "CreationDate": "12/21/2016 04:58:34", "LastActivityDate": "02/05/2018", "TotalViews": 377851, "TotalDownloads": 49859, "TotalVotes": 1202, "TotalKernels": 289}]
| null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df2 = pd.read_csv("/kaggle/input/anime-recommendations-database/anime.csv")
df2
# # Top 10 anime rating
Top_10_anime = df2[["name", "rating"]]
Top_10_anime = Top_10_anime.sort_values(["rating"], ascending=False)
Top_10_anime.head(10)
# # First 5 anime genres (unsorted sample)
top_5_anime_genre = df2[["name", "genre"]]
top_5_anime_genre.head(5)
# # Top 10 anime members
Top_10_anime_members = df2[["name", "members"]]
Top_10_anime_members = Top_10_anime_members.sort_values(["members"], ascending=False)
Top_10_anime_members.head(10)
# # First 5 anime episodes (unsorted sample)
Top_5_anime_with_episodes = df2[["name", "episodes"]]
Top_5_anime_with_episodes.head(5)
# # Verbalize your insights in Markdown cells.
# ## Anime Recommendations Database
# This data set contains information on user preference data from 73,516 users on 12,294 anime. Each user is able to add anime to their completed list and give it a rating and this data set is a compilation of those ratings.
# ## Content
# Anime.csv
# anime_id - myanimelist.net's unique id identifying an anime.
# name - full name of anime.
# genre - comma separated list of genres for this anime.
# type - movie, TV, OVA, etc.
# episodes - how many episodes in this show. (1 if movie).
# rating - average rating out of 10 for this anime.
# members - number of community members that are in this anime's
# "group".
# Rating.csv
# user_id - non identifiable randomly generated user id.
# anime_id - the anime that this user has rated.
# rating - rating out of 10 this user has assigned (-1 if the user watched it but didn't assign a rating).
# ## Acknowledgements
# Thanks to myanimelist.net API for providing anime data and user ratings.
import seaborn as sns
import matplotlib.pyplot as plt
# top_5_anime_rating
# plt.figure(figsize=(20, 10))
# sns.barplot(x="rating", y="name", data=Top_10_anime)
# # visualization of the top 5 anime members
Top_10_anime_members = Top_10_anime_members.head(5)
plt.figure()
sns.barplot(x="members", y="name", data=Top_10_anime_members, palette="viridis")
plt.title("top 5 anime members")
plt.xlabel("members")
plt.ylabel("name")
plt.show()
# # visualization of the relation between episodes and rating
plt.figure(figsize=(20, 10))
sns.lineplot(x="episodes", y="rating", data=df2.head(20), color="red")
plt.title("the relation between episodes and rating")
plt.xlabel("episodes")
plt.ylabel("rating")
| false | 0 | 974 | 0 | 1,295 | 974 |
||
129952596
|
#!pip install pretrainedmodels
#!pip install albumentations
#!pip install --upgrade efficientnet-pytorch
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cv2
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms
from torchvision.models import efficientnet_b7
from torch.utils.data import Dataset, DataLoader
from tqdm import tqdm
train_csv_path = "../input/happy-whale-and-dolphin/train.csv"
train_df = pd.read_csv(train_csv_path)
train_df.head()
class CustomDataset(Dataset):
def __init__(self, root_dir, df, label_to_id, transform):
self.root_dir = root_dir
self.df = df
self.label_to_id = label_to_id
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, index):
image_path = os.path.join(self.root_dir, self.df.iloc[index, 0])
image = cv2.imread(image_path, cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
label = self.df.iloc[index, 2]
target = self.label_to_id[label]
image = self.transform(image)
return image, torch.tensor(target)
train_transforms = transforms.Compose(
[
transforms.ToPILImage(),
transforms.Resize((600, 600)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
]
)
unique_individual_ids = train_df.individual_id.unique()
label_to_id = {}
id_to_label = {}
idx = 0
for label in unique_individual_ids:
label_to_id[label] = idx
id_to_label[idx] = label
idx += 1
root_dir = "../input/happy-whale-and-dolphin/train_images"
dataset = CustomDataset(root_dir, train_df, label_to_id, train_transforms)
train_loader = DataLoader(dataset, batch_size=8, shuffle=True)
model = efficientnet_b7(weights="IMAGENET1K_V1")
model.classifier = nn.Sequential(nn.Dropout(), nn.Linear(2560, len(label_to_id)))
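# Note (added): 2560 is the feature dimension of efficientnet_b7's final pooling
# output, so this new head maps those features to one logit per individual id.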
for name, param in model.named_parameters():
if "classifier" not in name:
param.requires_grad = False
# images, targets = next(iter(train_loader))
# images.shape, targets.shape
# output = model(images.cpu())
# output.shape
device = "cuda" if torch.cuda.is_available() else "cpu"
device
# Train model
model.to(device)
EPOCHS = 1
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
last_train_loss = 0
for epoch in range(EPOCHS):
print(f"Epoch: {epoch+1}/{EPOCHS}")
correct = 0
total = 0
losses = []
for batch_idx, data in enumerate(tqdm(train_loader)):
images, targets = data
images = images.to(device)
targets = targets.to(device)
output = model(images) # (batch_size, num_classes)
loss = criterion(output, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
_, pred = torch.max(output, 1)
correct += (pred == targets).sum().item()
total += pred.size(0)
losses.append(loss.item())
train_loss = np.mean(losses)
train_acc = correct * 1.0 / total
last_train_loss = train_loss
print(f"Train Loss: {train_loss}\tTrain Acc: {train_acc}")
torch.save(
{
"epoch": EPOCHS,
"model_state_dict": model.state_dict(),
"optimizer_state_dict": optimizer.state_dict(),
"loss": last_train_loss,
},
"last_checkpoint.pth.tar",
)
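# A minimal sketch of how this checkpoint could be restored later (added for
# illustration; it assumes the same model and optimizer definitions as above):
checkpoint = torch.load("last_checkpoint.pth.tar", map_location=device)
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])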
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/952/129952596.ipynb
| null | null |
[{"Id": 129952596, "ScriptId": 38513572, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4592893, "CreationDate": "05/17/2023 16:52:48", "VersionNumber": 1.0, "Title": "Szymon-Wale\u0144", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 137.0, "LinesInsertedFromPrevious": 137.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
#!pip install pretrainedmodels
#!pip install albumentations
#!pip install --upgrade efficientnet-pytorch
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cv2
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms
from torchvision.models import efficientnet_b7
from torch.utils.data import Dataset, DataLoader
from tqdm import tqdm
train_csv_path = "../input/happy-whale-and-dolphin/train.csv"
train_df = pd.read_csv(train_csv_path)
train_df.head()
class CustomDataset(Dataset):
def __init__(self, root_dir, df, label_to_id, transform):
self.root_dir = root_dir
self.df = df
self.label_to_id = label_to_id
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, index):
image_path = os.path.join(self.root_dir, self.df.iloc[index, 0])
image = cv2.imread(image_path, cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
label = self.df.iloc[index, 2]
target = self.label_to_id[label]
image = self.transform(image)
return image, torch.tensor(target)
train_transforms = transforms.Compose(
[
transforms.ToPILImage(),
transforms.Resize((600, 600)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
]
)
unique_individual_ids = train_df.individual_id.unique()
label_to_id = {}
id_to_label = {}
idx = 0
for label in unique_individual_ids:
label_to_id[label] = idx
id_to_label[idx] = label
idx += 1
root_dir = "../input/happy-whale-and-dolphin/train_images"
dataset = CustomDataset(root_dir, train_df, label_to_id, train_transforms)
train_loader = DataLoader(dataset, batch_size=8, shuffle=True)
model = efficientnet_b7(weights="IMAGENET1K_V1")
model.classifier = nn.Sequential(nn.Dropout(), nn.Linear(2560, len(label_to_id)))
for name, param in model.named_parameters():
if "classifier" not in name:
param.requires_grad = False
# images, targets = next(iter(train_loader))
# images.shape, targets.shape
# output = model(images.cpu())
# output.shape
device = "cuda" if torch.cuda.is_available() else "cpu"
device
# Train model
model.to(device)
EPOCHS = 1
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
last_train_loss = 0
for epoch in range(EPOCHS):
print(f"Epoch: {epoch+1}/{EPOCHS}")
correct = 0
total = 0
losses = []
for batch_idx, data in enumerate(tqdm(train_loader)):
images, targets = data
images = images.to(device)
targets = targets.to(device)
output = model(images) # (batch_size, num_classes)
loss = criterion(output, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
_, pred = torch.max(output, 1)
correct += (pred == targets).sum().item()
total += pred.size(0)
losses.append(loss.item())
train_loss = np.mean(losses)
train_acc = correct * 1.0 / total
last_train_loss = train_loss
print(f"Train Loss: {train_loss}\tTrain Acc: {train_acc}")
torch.save(
{
"epoch": EPOCHS,
"model_state_dict": model.state_dict(),
"optimizer_state_dict": optimizer.state_dict(),
"loss": last_train_loss,
},
"last_checkpoint.pth.tar",
)
| false | 0 | 1,060 | 0 | 1,060 | 1,060 |
||
129952366
|
<jupyter_start><jupyter_text>Music Industry Sales (1973 - 2019)
# About this dataset
> Over the last two decades, the music industry faced many disruptions due to new forms of media. From the old vinyl disks all the way to free and paid music streaming as we have right now, with noticeable changes caused by Napster, iTunes Store and Spotify to name a few.
All of those changes not only impacted the consumer but also the profitability of the music industry as a whole.
With this dataset, you can explore how music sales varied as disruptions took place within the industry.
# How to use
> - Explore music industry revenues by media and through time
- [More datasets](https://www.kaggle.com/andrewmvd/datasets)
# Acknowledgements
> ### Sources
- [Visualcapitalist](https://www.visualcapitalist.com/music-industry-sales/)
- [Riaa](https://www.riaa.com/u-s-sales-database/)
> ### License
License was not specified at the source
> ### Splash banner
Photo by [Thomas Kelley](https://unsplash.com/@thkelley?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/photos/2ZWCDBuw1B8).
> ### Splash icons
Icons made by [srip](https://www.flaticon.com/authors/srip) and [Freepik](https://www.flaticon.com/authors/freepik) from [www.flaticon.com](https://www.flaticon.com/).
Kaggle dataset identifier: music-sales
<jupyter_script># # Music Sales Analysis (1973 - 2019)
# We will be delving into just how much music sales have changed over the years. Specifically, what has caused those changes. Below you'll find a data dictionary:
# | Column name | Column Description |
# |-------------------|--------------------------|
# | format | Media used |
# | metric | Metric for format column |
# | year | Year |
# | number_of_records | Unit (all rows = 1) |
# | value_actual | Amount of metric column |
import pandas as pd
def import_music_sales():
return pd.read_csv("/kaggle/input/music-sales/musicdata.csv")
music_sales = import_music_sales()
music_sales
music_sales.shape
music_sales.columns
list(music_sales.columns)
music_sales.dtypes
music_sales.describe().round(2)
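# A possible next step (added sketch, not in the original draft): pivot value_actual by
# year and format for each metric, using only the columns named in the data dictionary.
sales_by_year = music_sales.pivot_table(
    index="year", columns=["metric", "format"], values="value_actual", aggfunc="sum"
)
sales_by_year.head()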
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/952/129952366.ipynb
|
music-sales
|
andrewmvd
|
[{"Id": 129952366, "ScriptId": 38333326, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11705015, "CreationDate": "05/17/2023 16:50:40", "VersionNumber": 3.0, "Title": "DRAFT - Music Sales Analysis", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 29.0, "LinesInsertedFromPrevious": 27.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 2.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 186385175, "KernelVersionId": 129952366, "SourceDatasetVersionId": 1277753}]
|
[{"Id": 1277753, "DatasetId": 736953, "DatasourceVersionId": 1309769, "CreatorUserId": 793761, "LicenseName": "Other (specified in description)", "CreationDate": "06/24/2020 19:16:08", "VersionNumber": 1.0, "Title": "Music Industry Sales (1973 - 2019)", "Slug": "music-sales", "Subtitle": "45+ years of sales by media, from vinyl to streaming", "Description": "# About this dataset\n> Over the last two decades, the music industry faced many disruptions due to new forms of media. From the old vinyl disks all the way to free and paid music streaming as we have right now, with noticeable changes caused by Napster, iTunes Store and Spotify to name a few.\nAll of those changes not only impacted the consumer but also the profitability of the music industry as a whole.\nWith this dataset, you can explore how music sales varied as disruptions took place within the industry.\n\n\n# How to use\n> - Explore music industry revenues by media and through time\n- [More datasets](https://www.kaggle.com/andrewmvd/datasets)\n\n# Acknowledgements\n> ### Sources\n- [Visualcapitalist](https://www.visualcapitalist.com/music-industry-sales/)\n- [Riaa](https://www.riaa.com/u-s-sales-database/)\n\n> ### License\nLicense was not specified at the source\n\n> ### Splash banner\nPhoto by [Thomas Kelley](https://unsplash.com/@thkelley?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/photos/2ZWCDBuw1B8).\n\n> ### Splash icons\nIcons made by [srip](https://www.flaticon.com/authors/srip) and [Freepik](https://www.flaticon.com/authors/freepik) from [www.flaticon.com](https://www.flaticon.com/).", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 736953, "CreatorUserId": 793761, "OwnerUserId": 793761.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1277753.0, "CurrentDatasourceVersionId": 1309769.0, "ForumId": 751820, "Type": 2, "CreationDate": "06/24/2020 19:16:08", "LastActivityDate": "06/24/2020", "TotalViews": 25930, "TotalDownloads": 2576, "TotalVotes": 42, "TotalKernels": 5}]
|
[{"Id": 793761, "UserName": "andrewmvd", "DisplayName": "Larxel", "RegisterDate": "11/15/2016", "PerformanceTier": 4}]
|
# # Music Sales Analysis (1973 - 2019)
# We will be delving into just how much music sales have changed over the years. Specifically, what has caused those changes. Below you'll find a data dictionary:
# | Column name | Column Description |
# |-------------------|--------------------------|
# | format | Media used |
# | metric | Metric for format column |
# | year | Year |
# | number_of_records | Unit (all rows = 1) |
# | value_actual | Amount of metric column |
import pandas as pd
def import_music_sales():
return pd.read_csv("/kaggle/input/music-sales/musicdata.csv")
music_sales = import_music_sales()
music_sales
music_sales.shape
music_sales.columns
list(music_sales.columns)
music_sales.dtypes
music_sales.describe().round(2)
| false | 1 | 225 | 1 | 637 | 225 |
||
129952493
|
<jupyter_start><jupyter_text>Possum Regression
### Context
Can you use your regression skills to predict the age of a possum, its head length, whether it is male or female? This classic practice regression dataset comes originally from the [DAAG R package](https://cran.r-project.org/web/packages/DAAG/index.html) (datasets used in examples and exercises in the book Maindonald, J.H. and Braun, W.J. (2003, 2007, 2010) "Data Analysis and Graphics Using R"). This dataset is also used in the [OpenIntro Statistics](https://www.openintro.org/book/os/) book chapter 8 *Introduction to linear regression*.
### Content
From the DAAG R package: "*The possum data frame consists of nine morphometric measurements on each of 104 mountain brushtail possums, trapped at seven sites from Southern Victoria to central Queensland*."
Kaggle dataset identifier: openintro-possum
<jupyter_code>import pandas as pd
df = pd.read_csv('openintro-possum/possum.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 104 entries, 0 to 103
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 case 104 non-null int64
1 site 104 non-null int64
2 Pop 104 non-null object
3 sex 104 non-null object
4 age 102 non-null float64
5 hdlngth 104 non-null float64
6 skullw 104 non-null float64
7 totlngth 104 non-null float64
8 taill 104 non-null float64
9 footlgth 103 non-null float64
10 earconch 104 non-null float64
11 eye 104 non-null float64
12 chest 104 non-null float64
13 belly 104 non-null float64
dtypes: float64(10), int64(2), object(2)
memory usage: 11.5+ KB
<jupyter_text>Examples:
{
"case": 1,
"site": 1,
"Pop": "Vic",
"sex": "m",
"age": 8,
"hdlngth": 94.1,
"skullw": 60.4,
"totlngth": 89.0,
"taill": 36.0,
"footlgth": 74.5,
"earconch": 54.5,
"eye": 15.2,
"chest": 28.0,
"belly": 36
}
{
"case": 2,
"site": 1,
"Pop": "Vic",
"sex": "f",
"age": 6,
"hdlngth": 92.5,
"skullw": 57.6,
"totlngth": 91.5,
"taill": 36.5,
"footlgth": 72.5,
"earconch": 51.2,
"eye": 16.0,
"chest": 28.5,
"belly": 33
}
{
"case": 3,
"site": 1,
"Pop": "Vic",
"sex": "f",
"age": 6,
"hdlngth": 94.0,
"skullw": 60.0,
"totlngth": 95.5,
"taill": 39.0,
"footlgth": 75.4,
"earconch": 51.9,
"eye": 15.5,
"chest": 30.0,
"belly": 34
}
{
"case": 4,
"site": 1,
"Pop": "Vic",
"sex": "f",
"age": 6,
"hdlngth": 93.2,
"skullw": 57.1,
"totlngth": 92.0,
"taill": 38.0,
"footlgth": 76.1,
"earconch": 52.2,
"eye": 15.2,
"chest": 28.0,
"belly": 34
}
<jupyter_script># Checkout my blog on **Lazy Predict** [here](https://medium.com/@Kavya2099/the-lazy-way-to-win-at-machine-learning-introducing-lazy-predict-f484e004dcd0)!
#
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/openintro-possum/possum.csv")
df.head()
df.shape
df.describe()
df.isnull().sum()
df.dropna(axis=0, inplace=True)
df.isnull().sum()
# Removed null values. Here, we are going to predict the age of a possum (a regression task), so we'll set age as the target feature.
X = df.drop(["age"], axis=1)
y = df.age
# Using lazypredict's LazyRegressor to train and evaluate multiple models
from sklearn.model_selection import train_test_split
from lazypredict.Supervised import LazyRegressor
# Split the dataset into training, validation, and testing sets
X_trainval, X_test, y_trainval, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
X_train, X_val, y_train, y_val = train_test_split(
X_trainval, y_trainval, test_size=0.25, random_state=42
)
# Use Lazy Predict to evaluate multiple models on the training set
clf = LazyRegressor(verbose=0, ignore_warnings=True, custom_metric=None)
models, predictions = clf.fit(X_train, X_val, y_train, y_val)
# Checking models
models
# Testing RFRegressor in Test data
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
clf = RandomForestRegressor(random_state=1)
X = pd.get_dummies(X)
# Split the dataset into training, validation, and testing sets
X_trainval, X_test, y_trainval, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
X_train, X_val, y_train, y_val = train_test_split(
X_trainval, y_trainval, test_size=0.25, random_state=42
)
# predict the results
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
MSE = mean_squared_error(y_test, y_pred)
MSE
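# Since MSE is in squared years, its square root (RMSE) reads as a typical error in
# years (added for illustration):
RMSE = np.sqrt(MSE)
RMSE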
# Checking the results on a random test sample
X_test.iloc[3]
print("Actual age: ", y_test.iloc[3])
print("Predicted age: ", y_pred[3])
print("Difference : ", y_test.iloc[3] - y_pred[3])
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/952/129952493.ipynb
|
openintro-possum
|
abrambeyer
|
[{"Id": 129952493, "ScriptId": 38516576, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11183057, "CreationDate": "05/17/2023 16:51:53", "VersionNumber": 2.0, "Title": "Possum Regression with lazy predict", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 96.0, "LinesInsertedFromPrevious": 3.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 93.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186385441, "KernelVersionId": 129952493, "SourceDatasetVersionId": 2532158}]
|
[{"Id": 2532158, "DatasetId": 1534513, "DatasourceVersionId": 2575036, "CreatorUserId": 432563, "LicenseName": "CC0: Public Domain", "CreationDate": "08/17/2021 01:15:54", "VersionNumber": 1.0, "Title": "Possum Regression", "Slug": "openintro-possum", "Subtitle": "Get Your Feet Wet With This Beginner Regression Dataset!", "Description": "### Context\n\nCan you use your regression skills to predict the age of a possum, its head length, whether it is male or female? This classic practice regression dataset comes originally from the [DAAG R package](https://cran.r-project.org/web/packages/DAAG/index.html) (datasets used in examples and exercises in the book Maindonald, J.H. and Braun, W.J. (2003, 2007, 2010) \"Data Analysis and Graphics Using R\"). This dataset is also used in the [OpenIntro Statistics](https://www.openintro.org/book/os/) book chapter 8 *Introduction to linear regression*.\n\n### Content\n\nFrom the DAAG R package: \"*The possum data frame consists of nine morphometric measurements on each of 104 mountain brushtail possums, trapped at seven sites from Southern Victoria to central Queensland*.\"\n\n\n### Acknowledgements\n\nData originally found in the [DAAG R package](https://cran.r-project.org/web/packages/DAAG/index.html) and used in the book Maindonald, J.H. and Braun, W.J. (2003, 2007, 2010) \"Data Analysis and Graphics Using R\"). \n\nA subset of the data was also put together for the [OpenIntro Statistics](https://www.openintro.org/book/os/) book chapter 8 *Introduction to linear regression*.\n\n***Original Source of dataset:***\n*Lindenmayer, D. B., Viggers, K. L., Cunningham, R. B., and Donnelly, C. F. 1995. Morphological\nvariation among columns of the mountain brushtail possum, Trichosurus caninus Ogilby (Phalangeridae: Marsupiala). Australian Journal of Zoology 43: 449-458.*\n\n### Inspiration\n\nGet your feet wet with regression techniques here on Kaggle by using this dataset. Perfect for beginners since the OpenIntro Statistics book does a good explanation in Chapter 8.\n\n* Can we use total length to predict a possum's head length?\n* Which possum body dimensions are most correlated with age and sex?\n* Can we classify a possum's sex by its body dimensions and location?\n* Can we predict a possum's trapping location from its body dimensions?", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1534513, "CreatorUserId": 432563, "OwnerUserId": 432563.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2532158.0, "CurrentDatasourceVersionId": 2575036.0, "ForumId": 1554365, "Type": 2, "CreationDate": "08/17/2021 01:15:54", "LastActivityDate": "08/17/2021", "TotalViews": 42492, "TotalDownloads": 6049, "TotalVotes": 74, "TotalKernels": 54}]
|
[{"Id": 432563, "UserName": "abrambeyer", "DisplayName": "ABeyer", "RegisterDate": "10/01/2015", "PerformanceTier": 1}]
|
# Checkout my blog on **Lazy Predict** [here](https://medium.com/@Kavya2099/the-lazy-way-to-win-at-machine-learning-introducing-lazy-predict-f484e004dcd0)!
#
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/openintro-possum/possum.csv")
df.head()
df.shape
df.describe()
df.isnull().sum()
df.dropna(axis=0, inplace=True)
df.isnull().sum()
# Removed null values. Here, we are going to predict the age of a possum (a regression task), so we'll set age as the target feature.
X = df.drop(["age"], axis=1)
y = df.age
# Using lazypredict's LazyRegressor to train and evaluate multiple models
from sklearn.model_selection import train_test_split
from lazypredict.Supervised import LazyRegressor
# Split the dataset into training, validation, and testing sets
X_trainval, X_test, y_trainval, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
X_train, X_val, y_train, y_val = train_test_split(
X_trainval, y_trainval, test_size=0.25, random_state=42
)
# Use Lazy Predict to evaluate multiple models on the training set
clf = LazyRegressor(verbose=0, ignore_warnings=True, custom_metric=None)
models, predictions = clf.fit(X_train, X_val, y_train, y_val)
# Checking models
models
# Testing RFRegressor in Test data
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
clf = RandomForestRegressor(random_state=1)
X = pd.get_dummies(X)
# Split the dataset into training, validation, and testing sets
X_trainval, X_test, y_trainval, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
X_train, X_val, y_train, y_val = train_test_split(
X_trainval, y_trainval, test_size=0.25, random_state=42
)
# predict the results
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
MSE = mean_squared_error(y_test, y_pred)
MSE
# Checking the results on a random test sample
X_test.iloc[3]
print("Actual age: ", y_test.iloc[3])
print("Predicted age: ", y_pred[3])
print("Difference : ", y_test.iloc[3] - y_pred[3])
|
[{"openintro-possum/possum.csv": {"column_names": "[\"case\", \"site\", \"Pop\", \"sex\", \"age\", \"hdlngth\", \"skullw\", \"totlngth\", \"taill\", \"footlgth\", \"earconch\", \"eye\", \"chest\", \"belly\"]", "column_data_types": "{\"case\": \"int64\", \"site\": \"int64\", \"Pop\": \"object\", \"sex\": \"object\", \"age\": \"float64\", \"hdlngth\": \"float64\", \"skullw\": \"float64\", \"totlngth\": \"float64\", \"taill\": \"float64\", \"footlgth\": \"float64\", \"earconch\": \"float64\", \"eye\": \"float64\", \"chest\": \"float64\", \"belly\": \"float64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 104 entries, 0 to 103\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 case 104 non-null int64 \n 1 site 104 non-null int64 \n 2 Pop 104 non-null object \n 3 sex 104 non-null object \n 4 age 102 non-null float64\n 5 hdlngth 104 non-null float64\n 6 skullw 104 non-null float64\n 7 totlngth 104 non-null float64\n 8 taill 104 non-null float64\n 9 footlgth 103 non-null float64\n 10 earconch 104 non-null float64\n 11 eye 104 non-null float64\n 12 chest 104 non-null float64\n 13 belly 104 non-null float64\ndtypes: float64(10), int64(2), object(2)\nmemory usage: 11.5+ KB\n", "summary": "{\"case\": {\"count\": 104.0, \"mean\": 52.5, \"std\": 30.166206257996713, \"min\": 1.0, \"25%\": 26.75, \"50%\": 52.5, \"75%\": 78.25, \"max\": 104.0}, \"site\": {\"count\": 104.0, \"mean\": 3.625, \"std\": 2.349085754819339, \"min\": 1.0, \"25%\": 1.0, \"50%\": 3.0, \"75%\": 6.0, \"max\": 7.0}, \"age\": {\"count\": 102.0, \"mean\": 3.8333333333333335, \"std\": 1.9092444897006104, \"min\": 1.0, \"25%\": 2.25, \"50%\": 3.0, \"75%\": 5.0, \"max\": 9.0}, \"hdlngth\": {\"count\": 104.0, \"mean\": 92.60288461538462, \"std\": 3.573349486079402, \"min\": 82.5, \"25%\": 90.675, \"50%\": 92.8, \"75%\": 94.725, \"max\": 103.1}, \"skullw\": {\"count\": 104.0, \"mean\": 56.88365384615384, \"std\": 3.1134256903770203, \"min\": 50.0, \"25%\": 54.975, \"50%\": 56.349999999999994, \"75%\": 58.1, \"max\": 68.6}, \"totlngth\": {\"count\": 104.0, \"mean\": 87.08846153846154, \"std\": 4.310549436569344, \"min\": 75.0, \"25%\": 84.0, \"50%\": 88.0, \"75%\": 90.0, \"max\": 96.5}, \"taill\": {\"count\": 104.0, \"mean\": 37.00961538461539, \"std\": 1.959518428592603, \"min\": 32.0, \"25%\": 35.875, \"50%\": 37.0, \"75%\": 38.0, \"max\": 43.0}, \"footlgth\": {\"count\": 103.0, \"mean\": 68.45922330097088, \"std\": 4.395305804641412, \"min\": 60.3, \"25%\": 64.6, \"50%\": 68.0, \"75%\": 72.5, \"max\": 77.9}, \"earconch\": {\"count\": 104.0, \"mean\": 48.13076923076923, \"std\": 4.109380151285827, \"min\": 40.3, \"25%\": 44.8, \"50%\": 46.8, \"75%\": 52.0, \"max\": 56.2}, \"eye\": {\"count\": 104.0, \"mean\": 15.046153846153846, \"std\": 1.0503742353818448, \"min\": 12.8, \"25%\": 14.4, \"50%\": 14.9, \"75%\": 15.725, \"max\": 17.8}, \"chest\": {\"count\": 104.0, \"mean\": 27.0, \"std\": 2.0455967391979963, \"min\": 22.0, \"25%\": 25.5, \"50%\": 27.0, \"75%\": 28.0, \"max\": 32.0}, \"belly\": {\"count\": 104.0, \"mean\": 32.58653846153846, \"std\": 2.7619487172923667, \"min\": 25.0, \"25%\": 31.0, \"50%\": 32.5, \"75%\": 34.125, \"max\": 40.0}}", "examples": 
"{\"case\":{\"0\":1,\"1\":2,\"2\":3,\"3\":4},\"site\":{\"0\":1,\"1\":1,\"2\":1,\"3\":1},\"Pop\":{\"0\":\"Vic\",\"1\":\"Vic\",\"2\":\"Vic\",\"3\":\"Vic\"},\"sex\":{\"0\":\"m\",\"1\":\"f\",\"2\":\"f\",\"3\":\"f\"},\"age\":{\"0\":8.0,\"1\":6.0,\"2\":6.0,\"3\":6.0},\"hdlngth\":{\"0\":94.1,\"1\":92.5,\"2\":94.0,\"3\":93.2},\"skullw\":{\"0\":60.4,\"1\":57.6,\"2\":60.0,\"3\":57.1},\"totlngth\":{\"0\":89.0,\"1\":91.5,\"2\":95.5,\"3\":92.0},\"taill\":{\"0\":36.0,\"1\":36.5,\"2\":39.0,\"3\":38.0},\"footlgth\":{\"0\":74.5,\"1\":72.5,\"2\":75.4,\"3\":76.1},\"earconch\":{\"0\":54.5,\"1\":51.2,\"2\":51.9,\"3\":52.2},\"eye\":{\"0\":15.2,\"1\":16.0,\"2\":15.5,\"3\":15.2},\"chest\":{\"0\":28.0,\"1\":28.5,\"2\":30.0,\"3\":28.0},\"belly\":{\"0\":36.0,\"1\":33.0,\"2\":34.0,\"3\":34.0}}"}}]
| true | 1 |
<start_data_description><data_path>openintro-possum/possum.csv:
<column_names>
['case', 'site', 'Pop', 'sex', 'age', 'hdlngth', 'skullw', 'totlngth', 'taill', 'footlgth', 'earconch', 'eye', 'chest', 'belly']
<column_types>
{'case': 'int64', 'site': 'int64', 'Pop': 'object', 'sex': 'object', 'age': 'float64', 'hdlngth': 'float64', 'skullw': 'float64', 'totlngth': 'float64', 'taill': 'float64', 'footlgth': 'float64', 'earconch': 'float64', 'eye': 'float64', 'chest': 'float64', 'belly': 'float64'}
<dataframe_Summary>
{'case': {'count': 104.0, 'mean': 52.5, 'std': 30.166206257996713, 'min': 1.0, '25%': 26.75, '50%': 52.5, '75%': 78.25, 'max': 104.0}, 'site': {'count': 104.0, 'mean': 3.625, 'std': 2.349085754819339, 'min': 1.0, '25%': 1.0, '50%': 3.0, '75%': 6.0, 'max': 7.0}, 'age': {'count': 102.0, 'mean': 3.8333333333333335, 'std': 1.9092444897006104, 'min': 1.0, '25%': 2.25, '50%': 3.0, '75%': 5.0, 'max': 9.0}, 'hdlngth': {'count': 104.0, 'mean': 92.60288461538462, 'std': 3.573349486079402, 'min': 82.5, '25%': 90.675, '50%': 92.8, '75%': 94.725, 'max': 103.1}, 'skullw': {'count': 104.0, 'mean': 56.88365384615384, 'std': 3.1134256903770203, 'min': 50.0, '25%': 54.975, '50%': 56.349999999999994, '75%': 58.1, 'max': 68.6}, 'totlngth': {'count': 104.0, 'mean': 87.08846153846154, 'std': 4.310549436569344, 'min': 75.0, '25%': 84.0, '50%': 88.0, '75%': 90.0, 'max': 96.5}, 'taill': {'count': 104.0, 'mean': 37.00961538461539, 'std': 1.959518428592603, 'min': 32.0, '25%': 35.875, '50%': 37.0, '75%': 38.0, 'max': 43.0}, 'footlgth': {'count': 103.0, 'mean': 68.45922330097088, 'std': 4.395305804641412, 'min': 60.3, '25%': 64.6, '50%': 68.0, '75%': 72.5, 'max': 77.9}, 'earconch': {'count': 104.0, 'mean': 48.13076923076923, 'std': 4.109380151285827, 'min': 40.3, '25%': 44.8, '50%': 46.8, '75%': 52.0, 'max': 56.2}, 'eye': {'count': 104.0, 'mean': 15.046153846153846, 'std': 1.0503742353818448, 'min': 12.8, '25%': 14.4, '50%': 14.9, '75%': 15.725, 'max': 17.8}, 'chest': {'count': 104.0, 'mean': 27.0, 'std': 2.0455967391979963, 'min': 22.0, '25%': 25.5, '50%': 27.0, '75%': 28.0, 'max': 32.0}, 'belly': {'count': 104.0, 'mean': 32.58653846153846, 'std': 2.7619487172923667, 'min': 25.0, '25%': 31.0, '50%': 32.5, '75%': 34.125, 'max': 40.0}}
<dataframe_info>
RangeIndex: 104 entries, 0 to 103
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 case 104 non-null int64
1 site 104 non-null int64
2 Pop 104 non-null object
3 sex 104 non-null object
4 age 102 non-null float64
5 hdlngth 104 non-null float64
6 skullw 104 non-null float64
7 totlngth 104 non-null float64
8 taill 104 non-null float64
9 footlgth 103 non-null float64
10 earconch 104 non-null float64
11 eye 104 non-null float64
12 chest 104 non-null float64
13 belly 104 non-null float64
dtypes: float64(10), int64(2), object(2)
memory usage: 11.5+ KB
<some_examples>
{'case': {'0': 1, '1': 2, '2': 3, '3': 4}, 'site': {'0': 1, '1': 1, '2': 1, '3': 1}, 'Pop': {'0': 'Vic', '1': 'Vic', '2': 'Vic', '3': 'Vic'}, 'sex': {'0': 'm', '1': 'f', '2': 'f', '3': 'f'}, 'age': {'0': 8.0, '1': 6.0, '2': 6.0, '3': 6.0}, 'hdlngth': {'0': 94.1, '1': 92.5, '2': 94.0, '3': 93.2}, 'skullw': {'0': 60.4, '1': 57.6, '2': 60.0, '3': 57.1}, 'totlngth': {'0': 89.0, '1': 91.5, '2': 95.5, '3': 92.0}, 'taill': {'0': 36.0, '1': 36.5, '2': 39.0, '3': 38.0}, 'footlgth': {'0': 74.5, '1': 72.5, '2': 75.4, '3': 76.1}, 'earconch': {'0': 54.5, '1': 51.2, '2': 51.9, '3': 52.2}, 'eye': {'0': 15.2, '1': 16.0, '2': 15.5, '3': 15.2}, 'chest': {'0': 28.0, '1': 28.5, '2': 30.0, '3': 28.0}, 'belly': {'0': 36.0, '1': 33.0, '2': 34.0, '3': 34.0}}
<end_description>
| 867 | 0 | 2,020 | 867 |
129952474
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import plotly.express as px
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
waste_df = pd.read_csv(
"/kaggle/input/municipal-solid-waste-landfills-waste-qnty-detail/cc75441b-e4d9-4394-899d-91df0dd3e545.csv"
)
waste_df.head()
waste_df.isna().sum()
waste_df.year_waste_disposed.value_counts()
waste_df.wastecompositiontype.value_counts()
waste_df.describe()
methane_fracs_df = waste_df[waste_df["methane_fract_annl_val"].notna()]
methane_fracs_df.info()
fig = px.scatter(
methane_fracs_df,
title="Methane Frac by Waste Type",
x="wastecompositiontype",
y="methane_fract_annl_val",
color="wastecompositiontype",
symbol="wastecompositiontype",
)
fig.update_layout(
title_x=0.5,
xaxis_title="Waste Composition Type",
yaxis_title="Methane Fraction Value",
legend_title="Waste Type",
)
fig.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/952/129952474.ipynb
| null | null |
[{"Id": 129952474, "ScriptId": 38655557, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10897806, "CreationDate": "05/17/2023 16:51:40", "VersionNumber": 1.0, "Title": "MSW Landfills", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 35.0, "LinesInsertedFromPrevious": 35.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import plotly.express as px
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
waste_df = pd.read_csv(
"/kaggle/input/municipal-solid-waste-landfills-waste-qnty-detail/cc75441b-e4d9-4394-899d-91df0dd3e545.csv"
)
waste_df.head()
waste_df.isna().sum()
waste_df.year_waste_disposed.value_counts()
waste_df.wastecompositiontype.value_counts()
waste_df.describe()
methane_fracs_df = waste_df[waste_df["methane_fract_annl_val"].notna()]
methane_fracs_df.info()
fig = px.scatter(
methane_fracs_df,
title="Methane Frac by Waste Type",
x="wastecompositiontype",
y="methane_fract_annl_val",
color="wastecompositiontype",
symbol="wastecompositiontype",
)
fig.update_layout(
title_x=0.5,
xaxis_title="Waste Composition Type",
yaxis_title="Methane Fraction Value",
legend_title="Waste Type",
)
fig.show()
| false | 0 | 492 | 0 | 492 | 492 |
||
129508994
|
<jupyter_start><jupyter_text>Biomechanical features of orthopedic patients
### Context
**The data have been organized in two different but related classification tasks.**
- column_3C_weka.csv (file with three class labels)
- The first task consists in classifying patients as belonging to one out of three categories: Normal (100 patients), Disk Hernia (60 patients) or Spondylolisthesis (150 patients).
- column_2C_weka.csv (file with two class labels)
- For the second task, the categories Disk Hernia and Spondylolisthesis were merged into a single category labelled as 'abnormal'. Thus, the second task consists in classifying patients as belonging to one out of two categories: Normal (100 patients) or Abnormal (210 patients).
### Content
Field Descriptions:
Each patient is represented in the data set by six biomechanical attributes derived from the shape and orientation of the pelvis and lumbar spine (each one is a column):
- pelvic incidence
- pelvic tilt
- lumbar lordosis angle
- sacral slope
- pelvic radius
- grade of spondylolisthesis
Kaggle dataset identifier: biomechanical-features-of-orthopedic-patients
<jupyter_script># # DATA SCIENTIST
# **In this tutorial, I explain only what you need to be a data scientist, neither more nor less.**
# A data scientist needs to have these skills:
# 1. Basic Tools: Like python, R or SQL. You do not need to know everything; all you need is to learn how to use **python**
# 1. Basic Statistics: Like mean, median or standard deviation. If you know basic statistics, you can use **python** easily.
# 1. Data Munging: Working with messy and difficult data, like inconsistent date and string formatting. As you can guess, **python** helps us.
# 1. Data Visualization: The title is self-explanatory. We will visualize the data with **python** libraries like matplotlib and seaborn.
# 1. Machine Learning: You do not need to understand the math behind the machine learning techniques. You only need to understand the basics of machine learning and learn how to implement it using **python**.
# ### As a summary we will learn python to be data scientist !!!
# ## For parts 1, 2, 3, 4, 5 and 6, look at DATA SCIENCE TUTORIAL for BEGINNERS
# https://www.kaggle.com/kanncaa1/data-sciencetutorial-for-beginners/
# ## In this tutorial, I am not going to teach you machine learning; I am going to explain how to learn something by yourself.
# # *Confucius: Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime*
# **Content:**
# 1. Introduction to Python:
# 1. Matplotlib
# 1. Dictionaries
# 1. Pandas
# 1. Logic, control flow and filtering
# 1. Loop data structures
# 1. Python Data Science Toolbox:
# 1. User defined function
# 1. Scope
# 1. Nested function
# 1. Default and flexible arguments
# 1. Lambda function
# 1. Anonymous function
# 1. Iterators
# 1. List comprehension
# 1. Cleaning Data
# 1. Diagnose data for cleaning
# 1. Exploratory data analysis
# 1. Visual exploratory data analysis
# 1. Tidy data
# 1. Pivoting data
# 1. Concatenating data
# 1. Data types
# 1. Missing data and testing with assert
# 1. Pandas Foundation
# 1. Review of pandas
# 1. Building data frames from scratch
# 1. Visual exploratory data analysis
# 1. Statistical exploratory data analysis
# 1. Indexing pandas time series
# 1. Resampling pandas time series
# 1. Manipulating Data Frames with Pandas
# 1. Indexing data frames
# 1. Slicing data frames
# 1. Filtering data frames
# 1. Transforming data frames
# 1. Index objects and labeled data
# 1. Hierarchical indexing
# 1. Pivoting data frames
# 1. Stacking and unstacking data frames
# 1. Melting data frames
# 1. Categoricals and groupby
# 1. Data Visualization
# 1. Seaborn: https://www.kaggle.com/kanncaa1/seaborn-for-beginners
# 1. Bokeh: https://www.kaggle.com/kanncaa1/interactive-bokeh-tutorial-part-1
# 1. Bokeh: https://www.kaggle.com/kanncaa1/interactive-bokeh-tutorial-part-2
# 1. Statistical Thinking
# 1. https://www.kaggle.com/kanncaa1/basic-statistic-tutorial-for-beginners
# 1. [Machine Learning](#1)
# 1. [Supervised Learning](#2)
# 1. [EDA(Exploratory Data Analysis)](#3)
# 1. [K-Nearest Neighbors (KNN)](#4)
# 1. [Regression](#5)
# 1. [Cross Validation (CV)](#6)
# 1. [ROC Curve](#7)
# 1. [Hyperparameter Tuning](#8)
# 1. [Pre-processing Data](#9)
# 1. [Unsupervised Learning](#10)
# 1. [Kmeans Clustering](#11)
# 1. [Evaluation of Clustering](#12)
# 1. [Standardization](#13)
# 1. [Hierarchy](#14)
# 1. [T - Distributed Stochastic Neighbor Embedding (T - SNE)](#15)
# 1. [Principal Component Analysis (PCA)](#16)
# 1. Deep Learning
# 1. https://www.kaggle.com/kanncaa1/deep-learning-tutorial-for-beginners
# 1. Time Series Prediction
# 1. https://www.kaggle.com/kanncaa1/time-series-prediction-tutorial-with-eda
# 1. Deep Learning with Pytorch
# 1. Artificial Neural Network: https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers
# 1. Convolutional Neural Network: https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers
# 1. Recurrent Neural Network: https://www.kaggle.com/kanncaa1/recurrent-neural-network-with-pytorch
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.svm import SVC
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
# import warnings
import warnings
# ignore warnings
warnings.filterwarnings("ignore")
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
# read csv (comma separated value) into data
data = pd.read_csv("../input/column_2C_weka.csv")
print(plt.style.available) # look at available plot styles
plt.style.use("ggplot")
# to see features and target variable
data.head()
# A well-known question is whether there are any NaN values and how long this data is, so let's look at info
data.info()
data.describe()
# pd.plotting.scatter_matrix:
# * green: *normal* and red: *abnormal*
# * c: color
# * figsize: figure size
# * diagonal: histogram of each feature
# * alpha: opacity
# * s: size of marker
# * marker: marker type
color_list = ["red" if i == "Abnormal" else "green" for i in data.loc[:, "class"]]
pd.plotting.scatter_matrix(
data.loc[:, data.columns != "class"],
c=color_list,
figsize=[15, 15],
diagonal="hist",
alpha=0.5,
s=200,
marker="*",
edgecolor="black",
)
plt.show()
# Okay, as you can see in the scatter matrix, there are relations between the features, but how many *normal (green)* and *abnormal (red)* classes are there?
# * The Seaborn library has *countplot()*, which counts the number of samples in each class
# * You can also print it with the *value_counts()* method
# This data looks balanced. Actually there is no strict definition or numeric threshold for balanced data, but this data is balanced enough for us.
# Now let's learn the first classification method, KNN
sns.countplot(x="class", data=data)
data.loc[:, "class"].value_counts()
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
import random
newdt = data["class"]
newdt = newdt.replace("Abnormal", 0)
newdt = newdt.replace("Normal", 1)
features = [
"pelvic_incidence",
"pelvic_tilt numeric",
"lumbar_lordosis_angle",
"sacral_slope",
"pelvic_radius",
]
X = data.loc[:, data.columns != "class"]
y = newdt
print(X.head())
print(y.head())
# GA to SVM
# **Algorithm Steps**
# 1. Initialize parameters: a target function (a.k.a. fitness-function), a region of interest.
# 2. Generate a random population with n elements
# 3. Apply genetic algorithm operators to a population: crossover & mutation -> create a new population.
# 4. Create a temporary population: concatenate the population with the new population.
# 5. Calculate fitness-function for each element of a temporary population.
# 6. Sort our temporary population elements by their fitness function.
# 7. Select the n best elements from the temporary population. If the stopping criteria are not met -> return to step 3. Else -> choose the 1st element of the population; it will be our solution. (A short sketch of how one chromosome decodes into SVC hyperparameters follows after the gene pools below.)
## !!! "Precomputed kernel parameter: Precomputed matrix must be a square matrix."
# kernel = ["linear", "rbf", "sigmoid", "precomputed"]
kernel = ["linear", "poly", "rbf", "sigmoid"]
degrees = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# gamma = [1, 10, 100,1000]
# c = [1, 10, 100, 200, 300, 400,600,800, 1000]
gamma = [0.0001, 0.001, 0.01, 0.1, 1, 10]
c = [0.0001, 0.001, 0.01, 0.1, 1, 10, 20, 30]
random_state = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
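# To make the encoding concrete, here is a minimal illustrative sketch (not part of the original
# notebook) of how a single chromosome, built from the gene pools above, decodes into an SVC.
# The particular gene values below are arbitrary examples.
example_chromosome = ["rbf", 3, 0.1, 1, 40]  # [kernel, degree, gamma, C, random_state]
example_clf = SVC(
    kernel=str(example_chromosome[0]),
    degree=int(example_chromosome[1]),  # degree is only used by the "poly" kernel
    gamma=float(example_chromosome[2]),
    C=float(example_chromosome[3]),
    random_state=int(example_chromosome[4]),
)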
# The init_population function creates a random population.
def init_population(size):
population = []
for i in range(size):
chromosome = np.ones(5)
chromosome = [
kernel[random.randint(0, len(kernel) - 1)],
degrees[random.randint(0, len(degrees) - 1)],
gamma[random.randint(0, len(gamma) - 1)],
c[random.randint(0, len(c) - 1)],
random_state[random.randint(0, len(random_state) - 1)],
]
population.append(chromosome)
print("[init_population]: population: ", population)
return population
# SVC evaluation method: fits an SVC with the given hyperparameters and returns test accuracy.
def compute_acc(X, y, kernel, degres, gamma, c, random_state):
    clf = SVC(
        kernel=str(kernel),
        degree=int(degres),
        gamma=float(gamma),  # gamma values in the pool are fractions, so cast to float, not int
        C=float(c),
        random_state=int(random_state),
    )
x_train, x_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=666
)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
acc = accuracy_score(y_test, y_pred)
return acc
# Returns the fitness (accuracy) value for every classifier in the population.
def acc_score(X, y, population):
    Score = pd.DataFrame({"Parameter": population})
    acc = []
    for ch in population:
        acc.append(compute_acc(X, y, str(ch[0]), int(ch[1]), float(ch[2]), float(ch[3]), int(ch[4])))
    Score["Accuracy"] = acc
    Score.sort_values(by="Accuracy", ascending=False, inplace=True)
    Score.reset_index(drop=True, inplace=True)
    return Score
# fitness_score returns the best parents together with their fitness scores.
def fitness_score(X, y, population):
scores = []
for chromosome in population:
result = compute_acc(
X,
y,
kernel=str(chromosome[0]),
degres=int(chromosome[1]),
gamma=int(chromosome[2]),
c=float(chromosome[3]),
random_state=int(chromosome[4]),
)
scores.append(result)
scores, population = np.array(scores), np.array(population, dtype=object)
inds = np.argsort(scores)
return list(scores[inds][::-1]), list(population[inds, :][::-1])
# The selection function picks the best parents.
def selection(pop_after_fit, n_parents):
population_nextgen = []
for i in range(n_parents):
population_nextgen.append(pop_after_fit[i])
return population_nextgen
# crossover exchanges genes between the first and the second parent.
def crossover(pop_after_sel):
pop_nextgen = pop_after_sel
for i in range(0, len(pop_after_sel), 2):
new_par = []
child_1, child_2 = pop_nextgen[i], pop_nextgen[i + 1]
new_par = np.concatenate(
(child_1[: len(child_1) // 2], child_2[len(child_1) // 2 :])
)
pop_nextgen.append(new_par)
return pop_nextgen
# Gene-shuffling helper used for mutation: swaps randomly chosen genes between chromosomes.
# mutation_rate is a fraction, so the number of swaps is derived from the population size
# (range() cannot take a float directly).
def shuffle(arr, mutation_rate):
    temp = 0
    arr = np.array(arr, dtype=object)
    shapes = arr.shape
    n_swaps = max(1, int(mutation_rate * shapes[0] * shapes[1]))
    for i in range(n_swaps):
        rand_arr_1, rnd_arr_in_1 = (
            random.randint(0, shapes[0]) - 1,
            random.randint(0, shapes[1]) - 1,
        )
        rand_arr_2 = random.randint(0, shapes[0]) - 1
        temp = arr[rand_arr_1][rnd_arr_in_1]
        arr[rand_arr_1][rnd_arr_in_1] = arr[rand_arr_2][rnd_arr_in_1]
        arr[rand_arr_2][rnd_arr_in_1] = temp
    return arr
# mutation applies mutation to randomly selected genes
# and swaps them between chromosomes at random.
def mutation(pop_after_cross, mutation_rate, n_feat):
pop_next = shuffle(arr=pop_after_cross, mutation_rate=mutation_rate)
return pop_next
# generations runs all of the functions above for the given number of generations.
# n_feat: number of genes per chromosome (5 for this population).
# n_parents: number of parents kept at the selection step; setting it equal to size is convenient.
# mutation_rate: mutation rate (fraction of genes to shuffle).
# n_gen: number of generations to run.
def generations(X, y, size, n_feat, n_parents, mutation_rate, n_gen):
best_chromosom = []
best_score = []
population_nextgen = init_population(size)
for i in range(n_gen):
scores, pop_after_fit = fitness_score(X, y, population_nextgen)
print("[generations] Best score in generation", i + 1, ":", scores[:1]) # 2
pop_after_sel = selection(pop_after_fit, n_parents)
pop_after_cross = crossover(pop_after_sel)
population_nextgen = mutation(pop_after_cross, mutation_rate, n_feat)
best_chromosom.append(pop_after_fit[0])
best_score.append(scores[0])
print("[generations] Scores: ", scores, " pop_after_fit: ", pop_after_fit)
return best_chromosom, best_score
from sklearn.svm import SVC
chromo_df_pd, score_pd = generations(
X, y, size=5, n_feat=5, n_parents=5, mutation_rate=0.20, n_gen=2
)
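# A short follow-up sketch (added here for clarity, not in the original notebook): pick the best
# chromosome found across all generations from the two lists returned above.
best_idx = int(np.argmax(score_pd))
print("Best accuracy:", score_pd[best_idx], "with chromosome:", chromo_df_pd[best_idx])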
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/508/129508994.ipynb
|
biomechanical-features-of-orthopedic-patients
| null |
[{"Id": 129508994, "ScriptId": 37558720, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6886723, "CreationDate": "05/14/2023 12:05:52", "VersionNumber": 7.0, "Title": "Genetic Algorithm to SVM", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 308.0, "LinesInsertedFromPrevious": 37.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 271.0, "LinesInsertedFromFork": 144.0, "LinesDeletedFromFork": 656.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 164.0, "TotalVotes": 0}]
|
[{"Id": 185637326, "KernelVersionId": 129508994, "SourceDatasetVersionId": 3987}]
|
[{"Id": 3987, "DatasetId": 2374, "DatasourceVersionId": 3987, "CreatorUserId": 484516, "LicenseName": "CC0: Public Domain", "CreationDate": "09/06/2017 22:54:11", "VersionNumber": 1.0, "Title": "Biomechanical features of orthopedic patients", "Slug": "biomechanical-features-of-orthopedic-patients", "Subtitle": "Classifying patients based on six features", "Description": "### Context\n\n**The data have been organized in two different but related classification tasks.** \n\n- column_3C_weka.csv (file with three class labels)\n - The first task consists in classifying patients as belonging to one out of three categories: Normal (100 patients), Disk Hernia (60 patients) or Spondylolisthesis (150 patients). \n\n- column_2C_weka.csv (file with two class labels)\n - For the second task, the categories Disk Hernia and Spondylolisthesis were merged into a single category labelled as 'abnormal'. Thus, the second task consists in classifying patients as belonging to one out of two categories: Normal (100 patients) or Abnormal (210 patients).\n\n### Content\n\nField Descriptions:\n\nEach patient is represented in the data set by six biomechanical attributes derived from the shape and orientation of the pelvis and lumbar spine (each one is a column): \n\n- pelvic incidence\n- pelvic tilt\n- lumbar lordosis angle\n- sacral slope\n- pelvic radius\n- grade of spondylolisthesis\n\n### Acknowledgements\n\nThe original dataset was downloaded from UCI ML repository:\n\nLichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science\n\nFiles were converted to CSV\n\n### Inspiration\n\nUse these biomechanical features to classify patients according to their labels", "VersionNotes": "Initial release", "TotalCompressedBytes": 51144.0, "TotalUncompressedBytes": 51144.0}]
|
[{"Id": 2374, "CreatorUserId": 484516, "OwnerUserId": NaN, "OwnerOrganizationId": 7.0, "CurrentDatasetVersionId": 3987.0, "CurrentDatasourceVersionId": 3987.0, "ForumId": 6351, "Type": 2, "CreationDate": "09/06/2017 22:54:11", "LastActivityDate": "02/05/2018", "TotalViews": 92149, "TotalDownloads": 18837, "TotalVotes": 228, "TotalKernels": 446}]
| null |
# # DATA SCIENTIST
# **In this tutorial, I explain only what you need to be a data scientist, neither more nor less.**
# A data scientist needs to have these skills:
# 1. Basic Tools: Like python, R or SQL. You do not need to know everything; all you need is to learn how to use **python**
# 1. Basic Statistics: Like mean, median or standard deviation. If you know basic statistics, you can use **python** easily.
# 1. Data Munging: Working with messy and difficult data, like inconsistent date and string formatting. As you can guess, **python** helps us.
# 1. Data Visualization: The title is self-explanatory. We will visualize the data with **python** libraries like matplotlib and seaborn.
# 1. Machine Learning: You do not need to understand the math behind the machine learning techniques. You only need to understand the basics of machine learning and learn how to implement it using **python**.
# ### As a summary we will learn python to be data scientist !!!
# ## For parts 1, 2, 3, 4, 5 and 6, look at DATA SCIENCE TUTORIAL for BEGINNERS
# https://www.kaggle.com/kanncaa1/data-sciencetutorial-for-beginners/
# ## In this tutorial, I am not going to teach you machine learning; I am going to explain how to learn something by yourself.
# # *Confucius: Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime*
# **Content:**
# 1. Introduction to Python:
# 1. Matplotlib
# 1. Dictionaries
# 1. Pandas
# 1. Logic, control flow and filtering
# 1. Loop data structures
# 1. Python Data Science Toolbox:
# 1. User defined function
# 1. Scope
# 1. Nested function
# 1. Default and flexible arguments
# 1. Lambda function
# 1. Anonymous function
# 1. Iterators
# 1. List comprehension
# 1. Cleaning Data
# 1. Diagnose data for cleaning
# 1. Exploratory data analysis
# 1. Visual exploratory data analysis
# 1. Tidy data
# 1. Pivoting data
# 1. Concatenating data
# 1. Data types
# 1. Missing data and testing with assert
# 1. Pandas Foundation
# 1. Review of pandas
# 1. Building data frames from scratch
# 1. Visual exploratory data analysis
# 1. Statistical exploratory data analysis
# 1. Indexing pandas time series
# 1. Resampling pandas time series
# 1. Manipulating Data Frames with Pandas
# 1. Indexing data frames
# 1. Slicing data frames
# 1. Filtering data frames
# 1. Transforming data frames
# 1. Index objects and labeled data
# 1. Hierarchical indexing
# 1. Pivoting data frames
# 1. Stacking and unstacking data frames
# 1. Melting data frames
# 1. Categoricals and groupby
# 1. Data Visualization
# 1. Seaborn: https://www.kaggle.com/kanncaa1/seaborn-for-beginners
# 1. Bokeh: https://www.kaggle.com/kanncaa1/interactive-bokeh-tutorial-part-1
# 1. Bokeh: https://www.kaggle.com/kanncaa1/interactive-bokeh-tutorial-part-2
# 1. Statistical Thinking
# 1. https://www.kaggle.com/kanncaa1/basic-statistic-tutorial-for-beginners
# 1. [Machine Learning](#1)
# 1. [Supervised Learning](#2)
# 1. [EDA(Exploratory Data Analysis)](#3)
# 1. [K-Nearest Neighbors (KNN)](#4)
# 1. [Regression](#5)
# 1. [Cross Validation (CV)](#6)
# 1. [ROC Curve](#7)
# 1. [Hyperparameter Tuning](#8)
# 1. [Pre-processing Data](#9)
# 1. [Unsupervised Learning](#10)
# 1. [Kmeans Clustering](#11)
# 1. [Evaluation of Clustering](#12)
# 1. [Standardization](#13)
# 1. [Hierarchy](#14)
# 1. [T - Distributed Stochastic Neighbor Embedding (T - SNE)](#15)
# 1. [Principal Component Analysis (PCA)](#16)
# 1. Deep Learning
# 1. https://www.kaggle.com/kanncaa1/deep-learning-tutorial-for-beginners
# 1. Time Series Prediction
# 1. https://www.kaggle.com/kanncaa1/time-series-prediction-tutorial-with-eda
# 1. Deep Learning with Pytorch
# 1. Artificial Neural Network: https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers
# 1. Convolutional Neural Network: https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers
# 1. Recurrent Neural Network: https://www.kaggle.com/kanncaa1/recurrent-neural-network-with-pytorch
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.svm import SVC
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
# import warnings
import warnings
# ignore warnings
warnings.filterwarnings("ignore")
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
# read csv (comma separated value) into data
data = pd.read_csv("../input/column_2C_weka.csv")
print(plt.style.available) # look at available plot styles
plt.style.use("ggplot")
# to see features and target variable
data.head()
# A well-known question is whether there are any NaN values and how long this data is, so let's look at info
data.info()
data.describe()
# pd.plotting.scatter_matrix:
# * green: *normal* and red: *abnormal*
# * c: color
# * figsize: figure size
# * diagonal: histogram of each feature
# * alpha: opacity
# * s: size of marker
# * marker: marker type
color_list = ["red" if i == "Abnormal" else "green" for i in data.loc[:, "class"]]
pd.plotting.scatter_matrix(
data.loc[:, data.columns != "class"],
c=color_list,
figsize=[15, 15],
diagonal="hist",
alpha=0.5,
s=200,
marker="*",
edgecolor="black",
)
plt.show()
# Okay, as you can see in the scatter matrix, there are relations between the features, but how many *normal (green)* and *abnormal (red)* classes are there?
# * The Seaborn library has *countplot()*, which counts the number of samples in each class
# * You can also print it with the *value_counts()* method
# This data looks balanced. Actually there is no strict definition or numeric threshold for balanced data, but this data is balanced enough for us.
# Now let's learn the first classification method, KNN
sns.countplot(x="class", data=data)
data.loc[:, "class"].value_counts()
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
import random
newdt = data["class"]
newdt = newdt.replace("Abnormal", 0)
newdt = newdt.replace("Normal", 1)
features = [
"pelvic_incidence",
"pelvic_tilt numeric",
"lumbar_lordosis_angle",
"sacral_slope",
"pelvic_radius",
]
X = data.loc[:, data.columns != "class"]
y = newdt
print(X.head())
print(y.head())
# GA to SVM
# **Algorithm Steps**
# 1. Initialize parameters: a target function (a.k.a. fitness-function), a region of interest.
# 2. Generate a random population with n elements
# 3. Apply genetic algorithm operators to a population: crossover & mutation -> create a new population.
# 4. Create a temporary population: concatenate the population with the new population.
# 5. Calculate fitness-function for each element of a temporary population.
# 6. Sort our temporary population elements by their fitness function.
# 7. Select the n best elements from the temporary population. If the stopping criteria are not met -> return to step 3. Else -> choose the 1st element of the population; it will be our solution.
## !!! "Precomputed kernel parameter: Precomputed matrix must be a square matrix."
# kernel = ["linear", "rbf", "sigmoid", "precomputed"]
kernel = ["linear", "poly", "rbf", "sigmoid"]
degrees = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# gamma = [1, 10, 100,1000]
# c = [1, 10, 100, 200, 300, 400,600,800, 1000]
gamma = [0.0001, 0.001, 0.01, 0.1, 1, 10]
c = [0.0001, 0.001, 0.01, 0.1, 1, 10, 20, 30]
random_state = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
# The init_population function creates a random population.
def init_population(size):
population = []
for i in range(size):
chromosome = np.ones(5)
chromosome = [
kernel[random.randint(0, len(kernel) - 1)],
degrees[random.randint(0, len(degrees) - 1)],
gamma[random.randint(0, len(gamma) - 1)],
c[random.randint(0, len(c) - 1)],
random_state[random.randint(0, len(random_state) - 1)],
]
population.append(chromosome)
print("[init_population]: population: ", population)
return population
# SVC evaluation method: fits an SVC with the given hyperparameters and returns test accuracy.
def compute_acc(X, y, kernel, degres, gamma, c, random_state):
    clf = SVC(
        kernel=str(kernel),
        degree=int(degres),
        gamma=float(gamma),  # gamma values in the pool are fractions, so cast to float, not int
        C=float(c),
        random_state=int(random_state),
    )
x_train, x_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=666
)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
acc = accuracy_score(y_test, y_pred)
return acc
# Returns the fitness (accuracy) value for every classifier in the population.
def acc_score(X, y, population):
    Score = pd.DataFrame({"Parameter": population})
    acc = []
    for ch in population:
        acc.append(compute_acc(X, y, str(ch[0]), int(ch[1]), float(ch[2]), float(ch[3]), int(ch[4])))
    Score["Accuracy"] = acc
    Score.sort_values(by="Accuracy", ascending=False, inplace=True)
    Score.reset_index(drop=True, inplace=True)
    return Score
# fitness_score returns the best parents together with their fitness scores.
def fitness_score(X, y, population):
scores = []
for chromosome in population:
result = compute_acc(
X,
y,
kernel=str(chromosome[0]),
degres=int(chromosome[1]),
gamma=int(chromosome[2]),
c=float(chromosome[3]),
random_state=int(chromosome[4]),
)
scores.append(result)
scores, population = np.array(scores), np.array(population, dtype=object)
inds = np.argsort(scores)
return list(scores[inds][::-1]), list(population[inds, :][::-1])
# The selection function picks the best parents.
def selection(pop_after_fit, n_parents):
population_nextgen = []
for i in range(n_parents):
population_nextgen.append(pop_after_fit[i])
return population_nextgen
# crossover exchanges genes between the first and the second parent.
def crossover(pop_after_sel):
pop_nextgen = pop_after_sel
for i in range(0, len(pop_after_sel), 2):
new_par = []
child_1, child_2 = pop_nextgen[i], pop_nextgen[i + 1]
new_par = np.concatenate(
(child_1[: len(child_1) // 2], child_2[len(child_1) // 2 :])
)
pop_nextgen.append(new_par)
return pop_nextgen
# Gene-shuffling helper used for mutation: swaps randomly chosen genes between chromosomes.
# mutation_rate is a fraction, so the number of swaps is derived from the population size
# (range() cannot take a float directly).
def shuffle(arr, mutation_rate):
    temp = 0
    arr = np.array(arr, dtype=object)
    shapes = arr.shape
    n_swaps = max(1, int(mutation_rate * shapes[0] * shapes[1]))
    for i in range(n_swaps):
        rand_arr_1, rnd_arr_in_1 = (
            random.randint(0, shapes[0]) - 1,
            random.randint(0, shapes[1]) - 1,
        )
        rand_arr_2 = random.randint(0, shapes[0]) - 1
        temp = arr[rand_arr_1][rnd_arr_in_1]
        arr[rand_arr_1][rnd_arr_in_1] = arr[rand_arr_2][rnd_arr_in_1]
        arr[rand_arr_2][rnd_arr_in_1] = temp
    return arr
# mutation applies mutation to randomly selected genes
# and swaps them between chromosomes at random.
def mutation(pop_after_cross, mutation_rate, n_feat):
pop_next = shuffle(arr=pop_after_cross, mutation_rate=mutation_rate)
return pop_next
# generations runs all of the functions above for the given number of generations.
# n_feat: number of genes per chromosome (5 for this population).
# n_parents: number of parents kept at the selection step; setting it equal to size is convenient.
# mutation_rate: mutation rate (fraction of genes to shuffle).
# n_gen: number of generations to run.
def generations(X, y, size, n_feat, n_parents, mutation_rate, n_gen):
best_chromosom = []
best_score = []
population_nextgen = init_population(size)
for i in range(n_gen):
scores, pop_after_fit = fitness_score(X, y, population_nextgen)
print("[generations] Best score in generation", i + 1, ":", scores[:1]) # 2
pop_after_sel = selection(pop_after_fit, n_parents)
pop_after_cross = crossover(pop_after_sel)
population_nextgen = mutation(pop_after_cross, mutation_rate, n_feat)
best_chromosom.append(pop_after_fit[0])
best_score.append(scores[0])
print("[generations] Scores: ", scores, " pop_after_fit: ", pop_after_fit)
return best_chromosom, best_score
from sklearn.svm import SVC
chromo_df_pd, score_pd = generations(
X, y, size=5, n_feat=5, n_parents=5, mutation_rate=0.20, n_gen=2
)
| false | 0 | 4,074 | 0 | 4,383 | 4,074 |
||
129508596
|
<jupyter_start><jupyter_text>Linear Regression Data-set
### Context
The reason behind providing the data-set is that I'm currently doing my Master's in Computer Science. In my second semester I chose a Data Science class in which they are teaching me Linear Regression, so I decided to provide a set of x and y values, which helps not only me but also others.
### Content
The dataset contains x and y values:
x values are just iterating values.
y values depend on the equation y = mx+c.
### Inspiration
Everyone on this planet (at least Computer Science students, etc.) should be familiar with Linear Regression, so calculate the trend line, R^2, coefficient and intercept values.
Kaggle dataset identifier: linear-regression-dataset
<jupyter_code>import pandas as pd
df = pd.read_csv('linear-regression-dataset/Linear Regression - Sheet1.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 300 entries, 0 to 299
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 X 300 non-null int64
1 Y 300 non-null float64
dtypes: float64(1), int64(1)
memory usage: 4.8 KB
<jupyter_text>Examples:
{
"X": 1.0,
"Y": 3.888888889
}
{
"X": 2.0,
"Y": 4.555555556
}
{
"X": 3.0,
"Y": 5.222222222
}
{
"X": 4.0,
"Y": 5.888888889
}
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # 1. Load the data you receive into a Pandas DataFrame.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv(
"/kaggle/input/linear-regression-dataset/Linear Regression - Sheet1.csv"
)
df
# # 2. Show the first five rows of the data set.
df.head()
# years_of_exp_val = df.y.values
# years_of_exp_val
# plt.scatter(df['X'],df['Y'], color='red')
# plt.xlabel('Square Teel')
# plt.ylabel('Taka')
# plt.title('X & Y')
# # 3. Show the description and the info of the data set
df.describe()
df.info()
# # 4. Ensure that any date columns have been cast into a datetime object in your DataFrame.
df.isnull().sum()
# df['x'] = pd.to_datetime(df['y'])
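# This dataset only contains the numeric columns X and Y (see df.info() above), so there is no
# date column to convert. As an illustrative sketch only: if a column named 'date' were present,
# it could be cast like this:
# df['date'] = pd.to_datetime(df['date'], errors='coerce')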
# # 5. Using a regression model, split your data into train and test portions.
# plt.scatter(df['X'], df['Y'], c = 'b')
# plt.title("X vs. Y")
# plt.xlabel("X")
# plt.ylabel("Y")
# plt.show()
from sklearn.model_selection import train_test_split
X = df.drop("Y", axis=1)  # features: the X column; 'Y' is the target we want to predict
y = df["Y"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# # 6. Fit your training split to the regression model.
# from sklearn.linear_model import LinearRegression
# regression_model = LinearRegression()
# regression_model.fit(X_train, y_train)
from sklearn.linear_model import LinearRegression
regression_model = LinearRegression()
regression_model.fit(X_train, y_train)
# # 7. Show your regression model’s score.
score = regression_model.score(X_test, y_test)
print("Regression model score:", score)
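# A short follow-up sketch (added for completeness, in line with the dataset description y = mx + c):
# inspect the fitted trend line's slope and intercept.
print("Coefficient (m):", regression_model.coef_)
print("Intercept (c):", regression_model.intercept_)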
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/508/129508596.ipynb
|
linear-regression-dataset
|
tanuprabhu
|
[{"Id": 129508596, "ScriptId": 38508307, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15002318, "CreationDate": "05/14/2023 12:01:48", "VersionNumber": 2.0, "Title": "Linear Regressions", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 88.0, "LinesInsertedFromPrevious": 60.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 28.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185636676, "KernelVersionId": 129508596, "SourceDatasetVersionId": 508365}]
|
[{"Id": 508365, "DatasetId": 239789, "DatasourceVersionId": 524531, "CreatorUserId": 3200273, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "06/21/2019 05:57:12", "VersionNumber": 1.0, "Title": "Linear Regression Data-set", "Slug": "linear-regression-dataset", "Subtitle": "Data-set for practicing Linear Regression", "Description": "### Context\n\nThe reason behind providing the data-set is that currently I'm doing my Master's in Computer Science, in my second semester I have chosen Data Science class, so in this class they are teaching me Linear Regression, so I decided to provide a set of x and y values, which not only helps me and also helps others.\n\n### Content\n\nThe dataset contains x and y values: \nx values are just iterating values.\ny values depend on the equation y = mx+c.\n\n\n\n\n### Inspiration\n\nEveryone on this planet should be familiar (at least Computer Science students, etc.) about Linear Regression, so calculate the trend line, R^2, coefficient and intercept values.", "VersionNotes": "Initial release", "TotalCompressedBytes": 4995.0, "TotalUncompressedBytes": 4995.0}]
|
[{"Id": 239789, "CreatorUserId": 3200273, "OwnerUserId": 3200273.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 508365.0, "CurrentDatasourceVersionId": 524531.0, "ForumId": 250964, "Type": 2, "CreationDate": "06/21/2019 05:57:12", "LastActivityDate": "06/21/2019", "TotalViews": 107159, "TotalDownloads": 11703, "TotalVotes": 102, "TotalKernels": 77}]
|
[{"Id": 3200273, "UserName": "tanuprabhu", "DisplayName": "Tanu N Prabhu", "RegisterDate": "05/09/2019", "PerformanceTier": 2}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # 1. Load the data you receive into a Pandas DataFrame.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv(
"/kaggle/input/linear-regression-dataset/Linear Regression - Sheet1.csv"
)
df
# # 2. Show the first five rows of the data set.
df.head()
# years_of_exp_val = df.y.values
# years_of_exp_val
# plt.scatter(df['X'],df['Y'], color='red')
# plt.xlabel('Square Teel')
# plt.ylabel('Taka')
# plt.title('X & Y')
# # 3. Show the description and the info of the data set
df.describe()
df.info()
# # 4. Ensure that any date columns have been cast into a datetime object in your DataFrame.
df.isnull().sum()
# df['x'] = pd.to_datetime(df['y'])
# # 5. Using a regression model, split your data into train and test portions.
# plt.scatter(df['X'], df['Y'], c = 'b')
# plt.title("X vs. Y")
# plt.xlabel("X")
# plt.ylabel("Y")
# plt.show()
from sklearn.model_selection import train_test_split
X = df.drop("Y", axis=1)  # features: the X column; 'Y' is the target we want to predict
y = df["Y"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# # 6. Fit your training split to the regression model.
# from sklearn.linear_model import LinearRegression
# regression_model = LinearRegression()
# regression_model.fit(X_train, y_train)
from sklearn.linear_model import LinearRegression
regression_model = LinearRegression()
regression_model.fit(X_train, y_train)
# # 7. Show your regression model’s score.
score = regression_model.score(X_test, y_test)
print("Regression model score:", score)
|
[{"linear-regression-dataset/Linear Regression - Sheet1.csv": {"column_names": "[\"X\", \"Y\"]", "column_data_types": "{\"X\": \"int64\", \"Y\": \"float64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 300 entries, 0 to 299\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 X 300 non-null int64 \n 1 Y 300 non-null float64\ndtypes: float64(1), int64(1)\nmemory usage: 4.8 KB\n", "summary": "{\"X\": {\"count\": 300.0, \"mean\": 150.5, \"std\": 86.74675786448736, \"min\": 1.0, \"25%\": 75.75, \"50%\": 150.5, \"75%\": 225.25, \"max\": 300.0}, \"Y\": {\"count\": 300.0, \"mean\": 102.21555556172666, \"std\": 57.842711037104436, \"min\": 1.888888889, \"25%\": 52.3888888925, \"50%\": 102.22222225, \"75%\": 152.055555575, \"max\": 201.8888889}}", "examples": "{\"X\":{\"0\":1,\"1\":2,\"2\":3,\"3\":4},\"Y\":{\"0\":3.888888889,\"1\":4.555555556,\"2\":5.222222222,\"3\":5.888888889}}"}}]
| true | 1 |
<start_data_description><data_path>linear-regression-dataset/Linear Regression - Sheet1.csv:
<column_names>
['X', 'Y']
<column_types>
{'X': 'int64', 'Y': 'float64'}
<dataframe_Summary>
{'X': {'count': 300.0, 'mean': 150.5, 'std': 86.74675786448736, 'min': 1.0, '25%': 75.75, '50%': 150.5, '75%': 225.25, 'max': 300.0}, 'Y': {'count': 300.0, 'mean': 102.21555556172666, 'std': 57.842711037104436, 'min': 1.888888889, '25%': 52.3888888925, '50%': 102.22222225, '75%': 152.055555575, 'max': 201.8888889}}
<dataframe_info>
RangeIndex: 300 entries, 0 to 299
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 X 300 non-null int64
1 Y 300 non-null float64
dtypes: float64(1), int64(1)
memory usage: 4.8 KB
<some_examples>
{'X': {'0': 1, '1': 2, '2': 3, '3': 4}, 'Y': {'0': 3.888888889, '1': 4.555555556, '2': 5.222222222, '3': 5.888888889}}
<end_description>
| 710 | 0 | 1,157 | 710 |
129508162
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
df = pd.read_csv("/kaggle/input/ab-nyc/AB_NYC_2019.csv")
df.head()
# **Data cleaning and preprocessing**
# * Before analyzing the data, it's important to clean and preprocess it to ensure that it is accurate and useful for analysis.
# * Data cleaning involves identifying and fixing errors, inconsistencies, and missing values in the dataset.
# * Preprocessing involves transforming the data into a format that is more suitable for analysis.
# * Removing duplicates: if there are duplicate rows in the dataset, you may want to remove them to avoid skewing the analysis results.
# Removing duplicate rows using the pandas drop_duplicates function
df = df.drop_duplicates()
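# A small sketch (an addition, following the missing-values point above): before fixing or
# imputing anything, inspect how many null values each column contains.
print(df.isna().sum())
# Rows with nulls in key columns could then be dropped or imputed, for example:
# df = df.dropna(subset=["name", "host_name"])  # column names here are illustrative only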
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/508/129508162.ipynb
| null | null |
[{"Id": 129508162, "ScriptId": 38504639, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6082093, "CreationDate": "05/14/2023 11:56:50", "VersionNumber": 3.0, "Title": "AIR_BNB", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 39.0, "LinesInsertedFromPrevious": 26.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 13.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
df = pd.read_csv("/kaggle/input/ab-nyc/AB_NYC_2019.csv")
df.head()
# **Data cleaning and preprocessing**
# * Before analyzing the data, it's important to clean and preprocess it to ensure that it is accurate and useful for analysis.
# * Data cleaning involves identifying and fixing errors, inconsistencies, and missing values in the dataset.
# * Preprocessing involves transforming the data into a format that is more suitable for analysis.
# * Removing duplicates: if there are duplicate rows in the dataset, you may want to remove them to avoid skewing the analysis results.
# Removing duplicate rows using the pandas drop_duplicates function
df = df.drop_duplicates()
| false | 0 | 250 | 0 | 250 | 250 |
||
129508154
|
#
#
# Machine Learning Project
# (R4CO3012P)
#
#
# E-commerce Shopper’s Behaviour Understanding:
#
#
# Understanding shopper’s purchasing pattern through Machine Learning
#
# #### **Description**
# Assume that you are working in a consultancy company and one of your clients runs an e-commerce company. They are interested in understanding customer behaviour around shopping. They have already collected the users' session data for a year. Each row belongs to a different user. The `Made_Purchase` column indicates whether the user made a purchase during that year. Your client is also interested in predicting that column using the other attributes of the users. The client also informs you that the data was collected by non-experts, so it might have some percentage of error in some columns.
# #### **Evaluation**
# The evaluation metric for this competition is [Mean F1-Score](https://en.wikipedia.org/wiki/F-score). The F1 score, commonly used in information retrieval, measures accuracy using the statistics precision and recall. The F1 metric weights recall and precision equally, and a good retrieval algorithm will maximize both precision and recall simultaneously. Thus, moderately good performance on both will be favored over extremely good performance on one and poor performance on the other.
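# A minimal illustrative sketch (an addition) of the metric itself: F1 is the harmonic mean of
# precision and recall, F1 = 2 * (precision * recall) / (precision + recall), and sklearn's
# f1_score computes the same value directly. The toy labels below are arbitrary.
from sklearn.metrics import f1_score, precision_score, recall_score
_y_true = [True, True, False, False, True]
_y_pred = [True, False, False, True, True]
_p, _r = precision_score(_y_true, _y_pred), recall_score(_y_true, _y_pred)
print(2 * _p * _r / (_p + _r), f1_score(_y_true, _y_pred))  # both print the same F1 (0.666...)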
# #### **Submission Format**
# The file should contain a header and have the following format:
# $$
# \begin{array}{c|c}
# `id` & `Made\_Purchase` \\
# \hline
# 1 & False\\
# \end{array}
# $$
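# A hedged sketch (an addition) of producing that file once a final model has been chosen;
# the id source and the fitted estimator name (`best_model`) are assumptions, not given here.
# submission = pd.DataFrame(
#     {"id": test_data.index, "Made_Purchase": best_model.predict(test_data).astype(bool)}
# )
# submission.to_csv("submission.csv", index=False)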
# #### **Submitted by**
# | Enrollment Number | Student Name |
# | :--: | :--------------: |
# | 221070801 | Humanshu Dilipkumar Gajbhiye |
# | 211071902 | Bhavika Milan Purav |
# | 201071044 | Anisha Pravin Jadhav |
# | 201071026 | Mayuri Premdas Pawar |
# ## 0.1 Standard Library Imports
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# ## 0.2 Importing Libraries
import xgboost as xgb, scipy as sp, matplotlib as mpl, seaborn as sns
from imblearn import under_sampling, over_sampling
# # **1. Get the Data**
# ## 1.1 Importing Dataset
test_data = pd.read_csv(
"https://raw.githubusercontent.com/HumanshuDG/CS2008P/main/test_data.csv"
)
train_data = pd.read_csv(
"https://raw.githubusercontent.com/HumanshuDG/CS2008P/main/train_data.csv"
)
sample_data = pd.read_csv(
"https://raw.githubusercontent.com/HumanshuDG/CS2008P/main/sample.csv"
)
train_data.shape, test_data.shape
# # **2. Preprocessing**
# ## 2.2 Data Imputation
train_data.isnull().sum()
# ## 2.2.1 Preprocessing Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import SimpleImputer, IterativeImputer, KNNImputer
from sklearn.preprocessing import (
StandardScaler,
MinMaxScaler,
OneHotEncoder,
QuantileTransformer,
OrdinalEncoder,
)
numerical_features = train_data.select_dtypes(
include=["int64", "float64"]
).columns.tolist()
categorical_features = train_data.select_dtypes(include=["object"]).columns.tolist()
num_estimators = [
("simple_imputer", SimpleImputer(missing_values=np.nan, strategy="most_frequent")),
("standard_scaler", StandardScaler()),
("quantile_transformer", QuantileTransformer()),
]
num_pipeline = Pipeline(steps=num_estimators)
cat_estimators = [("one_hot_encoder", OneHotEncoder())]
cat_pipeline = Pipeline(steps=cat_estimators)
preprocessing_pipe = ColumnTransformer(
[
("cat_pipeline", cat_pipeline, categorical_features),
("num_pipeline", num_pipeline, numerical_features),
]
)
from sklearn import set_config
# displays HTML representation in a jupyter context
set_config(display="diagram")
preprocessing_pipe
# ### 2.2.1 Data Imputation on `train_data`
train_data.replace("nan", np.nan, inplace=True)
num = train_data.select_dtypes(include=["int64", "float64"]).columns.tolist()
cat = train_data.select_dtypes(include=["object"]).columns.tolist()
# si_mean = SimpleImputer(missing_values = np.nan, strategy = 'mean')
# train_data[num] = si_mean.fit_transform(train_data[num])
# si_mode = SimpleImputer(missing_values = np.nan, strategy = 'most_frequent')
# train_data[cat] = si_mode.fit_transform(train_data[cat])
oe = OrdinalEncoder()
train_data[cat] = oe.fit_transform(train_data[cat])
knn = KNNImputer(n_neighbors=10)
train_data[num] = knn.fit_transform(train_data[num])
train_data[cat] = knn.fit_transform(train_data[cat])
# ohe = OneHotEncoder(handle_unknown = 'ignore')
# enc_train_data = ohe.fit_transform(train_data[cat])
# enc_train_data_df = pd.DataFrame(enc_train_data.toarray(), columns = oe.get_feature_names_out(cat))
# train_data = pd.concat([train_data, enc_train_data_df], axis = 1)
# train_data.drop(cat, axis = 1, inplace = True)
ss = StandardScaler()
train_data[num] = ss.fit_transform(train_data[num])
qt = QuantileTransformer(output_distribution="uniform")
train_data[num] = qt.fit_transform(train_data[num])
# train_data.head()
# ### 2.2.1 (Alternative) Data Imputation on `train_data`
# X = preprocessing_pipe.fit_transform(X)
# X = pd.DataFrame(X)
# X.head()
# ### 2.2.2 Data Imputation on `test_data`
test_data.replace("nan", np.nan, inplace=True)
num = test_data.select_dtypes(include=["int64", "float64"]).columns.tolist()
cat = test_data.select_dtypes(include=["object"]).columns.tolist()
# si_mean = SimpleImputer(missing_values = np.nan, strategy = 'mean')
# test_data[num] = si_mean.fit_transform(test_data[num])
# si_mode = SimpleImputer(missing_values = np.nan, strategy = 'most_frequent')
# test_data[cat] = si_mode.fit_transform(test_data[cat])
oe = OrdinalEncoder()
test_data[cat] = oe.fit_transform(test_data[cat])
knn = KNNImputer(n_neighbors=10)
test_data[num] = knn.fit_transform(test_data[num])
test_data[cat] = knn.fit_transform(test_data[cat])
# ohe = OneHotEncoder(handle_unknown = 'ignore')
# enc_test_data = ohe.fit_transform(test_data[cat])
# enc_test_data_df = pd.DataFrame(enc_test_data.toarray(), columns = oe.get_feature_names_out(cat))
# test_data = pd.concat([test_data, enc_test_data_df], axis = 1)
# test_data.drop(cat, axis = 1, inplace = True)
ss = StandardScaler()
test_data[num] = ss.fit_transform(test_data[num])
qt = QuantileTransformer(output_distribution="uniform")
test_data[num] = qt.fit_transform(test_data[num])
# test_data.head()
# ### 2.2.2 (Alternative) Data Imputation on `test_data`
# test_data = preprocessing_pipe.fit_transform(test_data)
# test_data = pd.DataFrame(test_data)
# test_data.head()
# ## Balancing using `imblearn`
# from imblearn.over_sampling import SMOTE
# X, y = SMOTE().fit_resample(X, y)
# X.shape, y.shape
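# Note (an addition): if over-sampling is used, it is safer to fit it on the training split only
# (created further below) so that no information from the test split leaks into training.
# A commented sketch under that assumption:
# from imblearn.over_sampling import SMOTE
# X_train, y_train = SMOTE(random_state=0).fit_resample(X_train, y_train)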
# ## Outlier Adjustment
from scipy import stats
train_data = train_data[(np.abs(stats.zscore(train_data[num])) < 3).all(axis=1)]  # keep rows whose numeric features all lie within 3 standard deviations
# test_data = test_data[(np.abs(stats.zscore(test_data[num])) < 3).all(axis=1)]
train_data.shape, test_data.shape
# ## Dropping Duplicates
print(train_data.duplicated().sum())
train_data = train_data.drop_duplicates()
train_data.shape, test_data.shape
mpl.pyplot.figure(figsize=(40, 40))
sns.heatmap(train_data.corr(), annot=True, square=True, cmap="RdBu")
# ## 2.3 Splitting the Data for training
# ### 2.3.1 Separating features and labels
y, X = train_data.pop("Made_Purchase"), train_data
X.shape, y.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=1127
)
X_train.describe().T
# # 3. Baseline Model
from sklearn.dummy import DummyClassifier
dummy_clf = DummyClassifier(strategy="most_frequent")
dummy_clf.fit(X, y)
DummyClassifier(strategy="most_frequent")
dummy_clf.predict(X)
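# A short baseline-scoring sketch (an addition) so that later models have a reference point.
# With the "most_frequent" strategy the minority class is never predicted, so F1 is typically 0.
from sklearn.metrics import f1_score
print("Baseline accuracy:", dummy_clf.score(X, y))
print("Baseline F1:", f1_score(y, dummy_clf.predict(X)))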
# # 4. Candidate Algorithms
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
best_accuracy = 0.0
best_classifier = 0
best_pipeline = ""
# ## 4.1 Logistic Regression
from sklearn.linear_model import LogisticRegression
pipeline_lr = Pipeline([("lr_classifier", LogisticRegression(max_iter=100000))])
# creating a pipeline object
pipe = Pipeline([("classifier", LogisticRegression())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [LogisticRegression()],
"classifier__max_iter": [10000, 100000],
"classifier__penalty": ["l1", "l2"],
"classifier__C": np.logspace(0, 4, 10),
"classifier__solver": ["newton-cg", "saga", "sag", "liblinear"],
}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# LogisticRegression(C=2.7825594022071245, max_iter=10000, penalty='l1',
# solver='saga')
# Accuracy: 0.6628222523744912
# F1: 0.28694404591104733
# LogisticRegression(C=2.7825594022071245, max_iter=10000, penalty='l1',
# solver='saga')
# Accuracy: 0.6441581519324745
# F1: 0.24075829383886257
# LogisticRegression(C=2.7825594022071245, max_iter=10000, penalty='l1',
# solver='saga')
# Accuracy: 0.6612846612846612
# F1: 0.4197233914612147
clf_lr = LogisticRegression(
C=2.7825594022071245, max_iter=10000, penalty="l1", solver="saga"
)
clf_lr.fit(X_train, y_train)
y_pred = clf_lr.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_lr,
"\nAccuracy:",
clf_lr.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
score_ = cross_val_score(clf_lr, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# ## 4.2 Decision Tree Classifier
from sklearn.tree import DecisionTreeClassifier
pipeline_dt = Pipeline([("dt_classifier", DecisionTreeClassifier())])
# creating a pipeline object
pipe = Pipeline([("classifier", DecisionTreeClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [DecisionTreeClassifier()],
"classifier__max_depth": [1, 2, 5, 10, 100],
}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# DecisionTreeClassifier(max_depth=1, max_leaf_nodes=10)
# Accuracy: 0.6609336609336609
# F1: 0.4180722891566265
clf_dt = DecisionTreeClassifier(max_depth=1, max_leaf_nodes=10)
clf_dt.fit(X_train, y_train)
y_pred = clf_dt.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_dt,
"\nAccuracy:",
clf_dt.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
# DecisionTreeClassifier(max_depth=1)
# Accuracy: 0.6486006219458018
# F1: 0.3500410846343468
# DecisionTreeClassifier(max_depth=1, max_leaf_nodes=10)
# Accuracy: 0.6609336609336609
# F1: 0.4180722891566265
score_ = cross_val_score(clf_dt, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# Cross Validation Score: 0.6619583968701385
# ## 4.3 Random Forest Classifier
from sklearn.ensemble import RandomForestClassifier
pipeline_rf = Pipeline(
[("rf_classifier", RandomForestClassifier(criterion="gini", n_estimators=100))]
)
# creating a pipeline object
pipe = Pipeline([("classifier", RandomForestClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [RandomForestClassifier()],
"classifier__n_estimators": [10, 100, 1000],
"classifier__min_samples_leaf": [2, 5, 10, 15, 100],
"classifier__max_leaf_nodes": [2, 5, 10, 20],
"classifier__max_depth": [None, 1, 2, 5, 8, 15, 25, 30],
}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# RandomForestClassifier(max_depth=30, max_leaf_nodes=20, min_samples_leaf=100,
# n_estimators=1000)
# Accuracy: 0.6784260515603799
# F1: 0.35422343324250677
clf_rf = RandomForestClassifier(
max_depth=30, max_leaf_nodes=20, min_samples_leaf=100, n_estimators=1000
)
clf_rf.fit(X_train, y_train)
y_pred = clf_rf.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_rf,
"\nAccuracy:",
clf_rf.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
# RandomForestClassifier(max_depth=30, max_leaf_nodes=20, min_samples_leaf=100,
# n_estimators=1000)
# Accuracy: 0.6534873389604621
# F1: 0.28702010968921393
# RandomForestClassifier(max_depth=30, max_leaf_nodes=20, min_samples_leaf=100,
# n_estimators=1000)
# Accuracy: 0.6595296595296596
# F1: 0.3618421052631579
score_ = cross_val_score(clf_rf, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# Cross Validation Score: 0.6655923048464668
# Cross Validation Score: 0.666170694515041
# ## 4.4 ADAboost Classifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
pipeline_adaboost = Pipeline([("adaboost_classifier", AdaBoostClassifier())])
# creating a pipeline object
pipe = Pipeline([("classifier", AdaBoostClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [AdaBoostClassifier()],
# 'classifier__estimator': [GaussianNB(), DecisionTreeClassifier(max_depth = 1)],
"classifier__algorithm": ["SAMME", "SAMME.R"],
"classifier__n_estimators": [1000, 4000, 6000, 10000],
"classifier__learning_rate": [0.01, 0.05, 0.1, 0.5],
}
]
# dict_keys(['memory', 'steps', 'verbose', 'classifier',
# 'classifier__algorithm', 'classifier__base_estimator',
# 'classifier__learning_rate', 'classifier__n_estimators', 'classifier__random_state'])
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# AdaBoostClassifier(algorithm = 'SAMME',
# estimators = clf_dt,
# learning_rate = 0.05,
# n_estimators = 4000)
# Accuracy: 0.6811397557666214
# F1: 0.4035532994923857
# AdaBoostClassifier(algorithm='SAMME',
# base_estimator=DecisionTreeClassifier(max_depth=1),
# learning_rate=0.05, n_estimators=6000)
# Accuracy: 0.6534873389604621
# F1: 0.32055749128919864
# AdaBoostClassifier(algorithm='SAMME',
# base_estimator=DecisionTreeClassifier(max_depth=1),
# learning_rate=0.05, n_estimators=6000)
# Accuracy: 0.7192498621070049
# F1: 0.6563133018230924
clf_ada = AdaBoostClassifier(
algorithm="SAMME", base_estimator=clf_dt, learning_rate=0.2, n_estimators=1400
)
clf_ada.fit(X_train, y_train)
y_pred = clf_ada.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_ada,
"\nAccuracy:",
clf_ada.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
# AdaBoostClassifier(algorithm='SAMME',
# base_estimator=DecisionTreeClassifier(max_depth=1),
# learning_rate=0.05, n_estimators=8000)
# Accuracy: 0.6534873389604621
# F1: 0.3193717277486911
# AdaBoostClassifier(algorithm='SAMME',
# base_estimator=DecisionTreeClassifier(max_depth=1),
# learning_rate=0.005, n_estimators=8000)
# Accuracy: 0.6521545979564638
# F1: 0.30646589902568644
# AdaBoostClassifier(algorithm='SAMME', base_estimator=DecisionTreeClassifier(max_depth=1,
# max_leaf_nodes=10), learning_rate=0.05, n_estimators=8000)
# Accuracy: 0.6633906633906634
# F1: 0.3824855119124276
clf_ada = AdaBoostClassifier(
algorithm="SAMME", base_estimator=clf_dt, learning_rate=0.9, n_estimators=4000
)
clf_ada.fit(X_train, y_train)
y_pred = clf_ada.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_ada,
"\nAccuracy:",
clf_ada.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
# AdaBoostClassifier(algorithm='SAMME',
# base_estimator=DecisionTreeClassifier(max_depth=1, max_leaf_nodes=10),
# learning_rate=0.09, n_estimators=4000)
# Accuracy: 0.6633906633906634
# F1: 0.3832797427652734
score_ = cross_val_score(clf_ada, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# Cross Validation Score: 0.6665930447650759
# Cross Validation Score: 0.6644157694499638
# ## 4.5 VotingClassifier
from sklearn.ensemble import VotingClassifier
pipeline_vc = VotingClassifier(
[
(
"vc_classifier",
VotingClassifier(estimators=[("ada", clf_ada), ("gnb", GaussianNB())]),
)
]
)
# creating a pipeline object
pipe = Pipeline(
[
(
"classifier",
VotingClassifier(
estimators=[("ada", AdaBoostClassifier()), ("gnb", GaussianNB())]
),
)
]
)
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [
VotingClassifier(
estimators=[("ada", AdaBoostClassifier()), ("gnb", GaussianNB())]
)
],
"classifier__voting": ["hard", "soft"],
}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# VotingClassifier(estimators=[('ada', AdaBoostClassifier()),
# ('gnb', GaussianNB())])
# Accuracy 0.664179104477612
# F1 0.32653061224489793
clf_vc = VotingClassifier(estimators=[("ada", clf_ada), ("gnb", GaussianNB())])
clf_vc.fit(X_train, y_train)
y_pred = clf_vc.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_vc, "\nAccuracy", clf_vc.score(X_test, y_test), "\nF1", f1_score(y_test, y_pred)
)
score_ = cross_val_score(clf_vc, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# ## 4.6 SVM Classifier
from sklearn.svm import SVC
pipeline_svm = Pipeline([("svm_classifier", SVC(gamma="auto"))])
# kernel{‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}
# creating a pipeline object
pipe = Pipeline([("classifier", SVC())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{"classifier": [SVC()], "classifier__kernel": ["linear", "poly", "rbf", "sigmoid"]}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# SVC()
# Accuracy 0.6648575305291723
# F1 0.31955922865013775
clf_svc = SVC()
clf_svc.fit(X_train, y_train)
y_pred = clf_svc.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_svc,
"\nAccuracy",
clf_svc.score(X_test, y_test),
"\nF1",
f1_score(y_test, y_pred),
)
score_ = cross_val_score(clf_svc, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# ## 4.7 KNN Classifier
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
pipeline_knn = Pipeline(
[
(
"knn_classifier",
BaggingClassifier(
KNeighborsClassifier(), max_samples=0.5, max_features=0.5
),
)
]
)
# creating a pipeline object
pipe = Pipeline([("classifier", BaggingClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{"classifier": [BaggingClassifier()], "classifier__warm_start": [True, False]}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# BaggingClassifier(warm_start=True)
# Accuracy 0.49525101763907736
# F1 0.2927756653992396
clf_knn = BaggingClassifier()
BaggingClassifier(
KNeighborsClassifier(), warm_start=True, max_samples=5, max_features=5
)
clf_knn.fit(X_train, y_train)
y_pred = clf_knn.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_knn,
"\nAccuracy",
clf_knn.score(X_test, y_test),
"\nF1",
f1_score(y_test, y_pred),
)
score_ = cross_val_score(clf_knn, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# ## 4.8 Multi-Layer Perceptron
from sklearn.neural_network import MLPClassifier
pipeline_mlp = Pipeline([("mlp_classifier", MLPClassifier())])
# creating a pipeline object
pipe = Pipeline([("classifier", MLPClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [MLPClassifier()],
"classifier__hidden_layer_sizes": [(100,), (400,), (600,)],
"classifier__activation": ["tanh", "relu"],
"classifier__solver": ["sgd", "adam"],
"classifier__learning_rate": ["constant", "invscaling", "adaptive"],
"classifier__max_iter": [1000, 10000],
}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# MLPClassifier(activation='tanh', hidden_layer_sizes=(400,),
# learning_rate='adaptive', max_iter=1000, solver='sgd')
# Accuracy: 0.666214382632293
# F1: 0.3031161473087819
clf_mlp = MLPClassifier(
hidden_layer_sizes=(40, 30, 20, 10),
max_iter=1000,
solver="sgd",
activation="tanh",
learning_rate="adaptive",
)
clf_mlp.fit(X_train, y_train)
y_pred = clf_mlp.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_mlp,
"\nAccuracy:",
clf_mlp.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
score_ = cross_val_score(clf_mlp, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# ## 4.9 HistGradientBoostingClassifier
from sklearn.ensemble import HistGradientBoostingClassifier
pipeline_hgb = Pipeline([("hgb_classifier", HistGradientBoostingClassifier())])
# loss='log_loss', *, learning_rate = 0.01, max_iter=100, max_leaf_nodes=31, max_depth=None, min_samples_leaf=20, l2_regularization=0.0, max_bins=255, categorical_features=None, monotonic_cst=None, interaction_cst=None, warm_start=False, early_stopping='auto', scoring='loss', validation_fraction=0.1, n_iter_no_change=10, tol=1e-07, verbose=0, random_state=None, class_weight=None
# creating a pipeline object
pipe = Pipeline([("classifier", HistGradientBoostingClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [HistGradientBoostingClassifier()],
"classifier__loss": [
"log_loss",
"auto",
"binary_crossentropy",
"categorical_crossentropy",
],
"classifier__learning_rate": np.logspace(0, 4, 10),
"classifier__max_iter": [1000, 10000],
"classifier__max_depth": [None, 1, 2, 5, 8, 15, 25, 30],
"classifier__min_samples_leaf": [None, 2, 5, 10, 15, 100],
}
]
# loss='log_loss',
# learning_rate = 0.01,
# max_iter=100,
# max_leaf_nodes=31,
# max_depth=None,
# min_samples_leaf=20,
# l2_regularization=0.0,
# max_bins=255,
# categorical_features=None,
# monotonic_cst=None,
# interaction_cst=None,
# warm_start=False,
# early_stopping='auto',
# scoring='loss',
# validation_fraction=0.1,
# n_iter_no_change=10,
# tol=1e-07,
# verbose=0,
# class_weight=None
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# # **5. Training**
# ## 5.1 Consolidating Pipelines
# pipelines = [pipeline_lr, pipeline_dt, pipeline_rf, pipeline_adaboost, pipeline_mlp]
# pipes = {
# 0: 'Logistic Regression',
# 1: 'Decision Tree',
# 2: 'Random Forest',
# 3: 'ADAboost',
# 4: 'Multi-Layer Perceptron'
# }
# ## 5.2 Fitting Models
# for pipe in pipelines:
# pipe.fit(X_train, y_train)
# ## 5.3 Scoring of Models
# ### 5.3.1 Accuracy
# for _, classifier in enumerate(pipelines):
# print('{} test accuracy: {}'.format(pipes[_], classifier.score(X_test, y_test)))
# # Logistic Regression test accuracy: 0.6628222523744912
# # Decision Tree test accuracy: 0.4497964721845319
# # Random Forest test accuracy: 0.5210312075983717
# # ADAboost test accuracy: 0.6723202170963365
# # Multi-Layer Perceptron test accuracy: 0.6648575305291723
# ### 5.3.2 F1
# from sklearn.metrics import f1_score
# for _, classifier in enumerate(pipelines):
# y_pred = classifier.predict(X_test)
# y_pred = y_pred.astype(bool)
# print('{} f1-score: {}'.format(pipes[_], f1_score(y_test, y_pred)))
# # Logistic Regression f1-score: 0.28694404591104733
# # Decision Tree f1-score: 0.2453531598513011
# # Random Forest f1-score: 0.28771228771228774
# # ADAboost f1-score: 0.3893805309734514
# # **6. Hyperparameter Tuning**
# ## 6.1 Importing GridSearchCV
# from sklearn.model_selection import GridSearchCV
# ## 6.2 Creating a pipeline fot GridSearchCV
# # creating a pipeline object
# pipe = Pipeline([('classifier', RandomForestClassifier())])
# # creating a dictionary with candidate learning algorithms and their hyperparameters
# grid_parameters = [
# {
# 'classifier': [LogisticRegression()],
# 'classifier__max_iter': [10000, 100000],
# 'classifier__penalty': ['l1', 'l2'],
# 'classifier__C': np.logspace(0, 4, 10),
# 'classifier__solver': ['newton-cg', 'saga', 'sag', 'liblinear']
# },
# {
# 'classifier': [RandomForestClassifier()],
# 'classifier__n_estimators': [10, 100, 1000],
# 'classifier__min_samples_leaf': [2, 5, 10, 15, 100],
# 'classifier__max_leaf_nodes': [2, 5, 10, 20],
# 'classifier__max_depth': [None, 1, 2, 5, 8, 15, 25, 30]
# },
# {
# 'classifier': [AdaBoostClassifier()],
# 'classifier__n_estimators': [100, 1000],
# 'classifier__learning_rate': [0.001, 0.01],
# 'classifier__random_state': [1127]
# },
# {
# 'classifier': [VotingClassifier()],
# 'classifier__voting': ['hard', 'soft']
# },
# {
# 'classifier': [MLPClassifier()],
# 'classifier__hidden_layer_sizes': [(100, ), (1000,)],
# 'classifier__activation': ['identity', 'logistic', 'tanh', 'relu'],
# 'classifier__solver': ['lbfgs', 'sgd', 'adam'],
# 'classifier__learning_rate': ['constant', 'invscaling', 'adaptive'],
# 'classifier__max_iter': [200, 1000, 10000]
# },
# ]
# grid_search = GridSearchCV(pipe, grid_parameters, cv = 4, verbose = 0, n_jobs = -1)
# best_model = grid_search.fit(X_train, y_train)
# print(best_model.best_estimator_)
# print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# ## 6.3 Creating GridSearchCV
# grid_search = GridSearchCV(pipe, grid_parameters, cv = 4, verbose = 0, n_jobs = -1)
# best_model = grid_search.fit(X_train, y_train)
# ## 6.4 Fitting best parameters
# print(best_model.best_estimator_)
# print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# Pipeline(steps=[('classifier',
# RandomForestClassifier(max_depth=5, max_leaf_nodes=10,
# min_samples_leaf=15))])
# The mean accuracy of the model is: 0.6709633649932157
# Pipeline(steps=[('classifier',
# AdaBoostClassifier(learning_rate=0.01, n_estimators=1000,
# random_state=1127))])
# The mean accuracy of the model is: 0.6811397557666214
# ## 6.5 Calculating scores of best parameters
# ### 6.5.1 F1
# y_pred = best_model.predict(X_test)
# y_pred = y_pred.astype(bool)
# print('{} f1-score: {}'.format(pipes[_], f1_score(y_test, y_pred)))
# # Random Forest f1-score: 0.3081312410841655
# # ADAboost f1-score: 0.4035532994923857
# ### 6.5.2 Cross Validation
# from sklearn.model_selection import cross_val_score
# score_ = cross_val_score(best_model, X_train, y_train, cv = 3).mean()
# print('{} cross_validation-score: {}'.format(pipes[_], score_))
# # Random Forest cross_validation-score: 0.6551260431050521
# # Submitting as CSV file
sub = pd.DataFrame(clf_ada.predict(test_data), columns=["Made_Purchase"])
sub.index.name = "id"
sub.to_csv("submission.csv", encoding="UTF-8")
output = pd.read_csv("submission.csv")
# with open('/kaggle/working/submission.csv') as f:
# ff = f.readlines()
# print(*ff)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/508/129508154.ipynb
| null | null |
[{"Id": 129508154, "ScriptId": 38488252, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13165633, "CreationDate": "05/14/2023 11:56:44", "VersionNumber": 1.0, "Title": "Machine Learning Project", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 940.0, "LinesInsertedFromPrevious": 441.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 499.0, "LinesInsertedFromFork": 441.0, "LinesDeletedFromFork": 437.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 499.0, "TotalVotes": 0}]
| null | null | null | null |
#
#
# Machine Learning Project
# (R4CO3012P)
#
#
# E-commerce Shopper’s Behaviour Understanding:
#
#
# Understanding shopper’s purchasing pattern through Machine Learning
#
# #### **Description**
# Assume that you are working in a consultancy company and one of your client is running an e-commerce company. They are interested in understanding the customer behavior regarding the shopping. They have already collected the users' session data for a year. Each row belongs to a different user. The `Made_purchase` is an indicator that whether the user has made a purchase or not during that year. Your client is also interested in predicting that column using other attributes of the users. The client also informs you that the data is collected by non-experts. So, it might have some percentage of error in some columns.
# #### **Evaluation**
# The evaluation metric for this competition is [Mean F1-Score](https://en.wikipedia.org/wiki/F-score). The F1 score, commonly used in information retrieval, measures accuracy using the statistics precision. The F1 metric weights recall and precision equally, and a good retrieval algorithm will maximize both precision and recall simultaneously. Thus, moderately good performance on both will be favored over extremely good performance on one and poor performance on the other.
# #### **Submission Format**
# The file should contain a header and have the following format:
# $$
# \begin{array}{c|c}
# `id` & `Made\_Purchase` \\
# \hline
# 1 & False\\
# \end{array}
# $$
# #### **Submitted by**
# | Enrollment Number | Student Name |
# | :--: | :--------------: |
# | 221070801 | Humanshu Dilipkumar Gajbhiye |
# | 211071902 | Bhavika Milan Purav |
# | 201071044 | Anisha Pravin Jadhav |
# | 201071026 | Mayuri Premdas Pawar |
# ## 0.1 Standard Library Imports
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# ## 0.2 Importing Libraries
import xgboost as xgb, scipy as sp, matplotlib as mpl, seaborn as sns
from imblearn import under_sampling, over_sampling
# # **1. Get the Data**
# ## 1.1 Importing Dataset
test_data = pd.read_csv(
"https://raw.githubusercontent.com/HumanshuDG/CS2008P/main/test_data.csv"
)
train_data = pd.read_csv(
"https://raw.githubusercontent.com/HumanshuDG/CS2008P/main/train_data.csv"
)
sample_data = pd.read_csv(
"https://raw.githubusercontent.com/HumanshuDG/CS2008P/main/sample.csv"
)
train_data.shape, test_data.shape
# # **2. Preprocessing**
# ## 2.2 Data Imputation
train_data.isnull().sum()
# ## 2.2.1 Preprocessing Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import SimpleImputer, IterativeImputer, KNNImputer
from sklearn.preprocessing import (
StandardScaler,
MinMaxScaler,
OneHotEncoder,
QuantileTransformer,
OrdinalEncoder,
)
numerical_features = train_data.select_dtypes(
include=["int64", "float64"]
).columns.tolist()
categorical_features = train_data.select_dtypes(include=["object"]).columns.tolist()
num_estimators = [
("simple_imputer", SimpleImputer(missing_values=np.nan, strategy="most_frequent")),
("standard_scaler", StandardScaler()),
("quantile_transformer", QuantileTransformer()),
]
num_pipeline = Pipeline(steps=num_estimators)
cat_estimators = [("one_hot_encoder", OneHotEncoder())]
cat_pipeline = Pipeline(steps=cat_estimators)
preprocessing_pipe = ColumnTransformer(
[
("cat_pipeline", cat_pipeline, categorical_features),
("num_pipeline", num_pipeline, numerical_features),
]
)
from sklearn import set_config
# displays HTML representation in a jupyter context
set_config(display="diagram")
preprocessing_pipe
# ### 2.2.1 Data Imputation on `train_data`
train_data.replace("nan", np.nan, inplace=True)
num = train_data.select_dtypes(include=["int64", "float64"]).columns.tolist()
cat = train_data.select_dtypes(include=["object"]).columns.tolist()
# si_mean = SimpleImputer(missing_values = np.nan, strategy = 'mean')
# train_data[num] = si_mean.fit_transform(train_data[num])
# si_mode = SimpleImputer(missing_values = np.nan, strategy = 'most_frequent')
# train_data[cat] = si_mode.fit_transform(train_data[cat])
oe = OrdinalEncoder()
train_data[cat] = oe.fit_transform(train_data[cat])
knn = KNNImputer(n_neighbors=10)
train_data[num] = knn.fit_transform(train_data[num])
train_data[cat] = knn.fit_transform(train_data[cat])
# ohe = OneHotEncoder(handle_unknown = 'ignore')
# enc_train_data = ohe.fit_transform(train_data[cat])
# enc_train_data_df = pd.DataFrame(enc_train_data.toarray(), columns = oe.get_feature_names_out(cat))
# train_data = pd.concat([train_data, enc_train_data_df], axis = 1)
# train_data.drop(cat, axis = 1, inplace = True)
ss = StandardScaler()
train_data[num] = ss.fit_transform(train_data[num])
qt = QuantileTransformer(output_distribution="uniform")
train_data[num] = qt.fit_transform(train_data[num])
# train_data.head()
# ### 2.2.1 (Alternative) Data Imputation on `train_data`
# X = preprocessing_pipe.fit_transform(X)
# X = pd.DataFrame(X)
# X.head()
# ### 2.2.2 Data Imputation on `test_data`
test_data.replace("nan", np.nan, inplace=True)
num = test_data.select_dtypes(include=["int64", "float64"]).columns.tolist()
cat = test_data.select_dtypes(include=["object"]).columns.tolist()
# si_mean = SimpleImputer(missing_values = np.nan, strategy = 'mean')
# test_data[num] = si_mean.fit_transform(test_data[num])
# si_mode = SimpleImputer(missing_values = np.nan, strategy = 'most_frequent')
# test_data[cat] = si_mode.fit_transform(test_data[cat])
oe = OrdinalEncoder()
test_data[cat] = oe.fit_transform(test_data[cat])
knn = KNNImputer(n_neighbors=10)
test_data[num] = knn.fit_transform(test_data[num])
test_data[cat] = knn.fit_transform(test_data[cat])
# ohe = OneHotEncoder(handle_unknown = 'ignore')
# enc_test_data = ohe.fit_transform(test_data[cat])
# enc_test_data_df = pd.DataFrame(enc_test_data.toarray(), columns = oe.get_feature_names_out(cat))
# test_data = pd.concat([test_data, enc_test_data_df], axis = 1)
# test_data.drop(cat, axis = 1, inplace = True)
ss = StandardScaler()
test_data[num] = ss.fit_transform(test_data[num])
qt = QuantileTransformer(output_distribution="uniform")
test_data[num] = qt.fit_transform(test_data[num])
# test_data.head()
# ### 2.2.2 (Alternative) Data Imputation on `test_data`
# test_data = preprocessing_pipe.fit_transform(test_data)
# test_data = pd.DataFrame(test_data)
# test_data.head()
# ## Balancing using `imblearn`
# from imblearn.over_sampling import SMOTE
# X, y = SMOTE().fit_resample(X, y)
# X.shape, y.shape
# ## Outlier Adjustment
from scipy import stats
train_data = train_data[(np.abs(stats.zscore(train_data[num])) < 3).all(axis=1)]
# test_data = test_data[(np.abs(stats.zscore(test_data[num])) < 3).all(axis=1)]
train_data.shape, test_data.shape
# ## Dropping Duplicates
print(train_data.duplicated().sum())
train_data = train_data.drop_duplicates()
train_data.shape, test_data.shape
mpl.pyplot.figure(figsize=(40, 40))
sns.heatmap(train_data.corr(), annot=True, square=True, cmap="RdBu")
# ## 2.3 Splitting the Data for training
# ## 2.1 Separating features and labels
y, X = train_data.pop("Made_Purchase"), train_data
X.shape, y.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=1127
)
X_train.describe().T
# # 3. Baseline Model
from sklearn.dummy import DummyClassifier
dummy_clf = DummyClassifier(strategy="most_frequent")
dummy_clf.fit(X, y)
DummyClassifier(strategy="most_frequent")
dummy_clf.predict(X)
# # 4. Candidate Algorithms
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
best_accuracy = 0.0
best_classifier = 0
best_pipeline = ""
# ## 4.1 Logistic Regression
from sklearn.linear_model import LogisticRegression
pipeline_lr = Pipeline([("lr_classifier", LogisticRegression(max_iter=100000))])
# creating a pipeline object
pipe = Pipeline([("classifier", LogisticRegression())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [LogisticRegression()],
"classifier__max_iter": [10000, 100000],
"classifier__penalty": ["l1", "l2"],
"classifier__C": np.logspace(0, 4, 10),
"classifier__solver": ["newton-cg", "saga", "sag", "liblinear"],
}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# LogisticRegression(C=2.7825594022071245, max_iter=10000, penalty='l1',
# solver='saga')
# Accuracy: 0.6628222523744912
# F1: 0.28694404591104733
# LogisticRegression(C=2.7825594022071245, max_iter=10000, penalty='l1',
# solver='saga')
# Accuracy: 0.6441581519324745
# F1: 0.24075829383886257
# LogisticRegression(C=2.7825594022071245, max_iter=10000, penalty='l1',
# solver='saga')
# Accuracy: 0.6612846612846612
# F1: 0.4197233914612147
clf_lr = LogisticRegression(
C=2.7825594022071245, max_iter=10000, penalty="l1", solver="saga"
)
clf_lr.fit(X_train, y_train)
y_pred = clf_lr.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_lr,
"\nAccuracy:",
clf_lr.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
score_ = cross_val_score(clf_lr, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# ## 4.2 Decision Tree Classifier
from sklearn.tree import DecisionTreeClassifier
pipeline_dt = Pipeline([("dt_classifier", DecisionTreeClassifier())])
# creating a pipeline object
pipe = Pipeline([("classifier", DecisionTreeClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [DecisionTreeClassifier()],
"classifier__max_depth": [1, 2, 5, 10, 100],
}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# DecisionTreeClassifier(max_depth=1, max_leaf_nodes=10)
# Accuracy: 0.6609336609336609
# F1: 0.4180722891566265
clf_dt = DecisionTreeClassifier(max_depth=1, max_leaf_nodes=10)
clf_dt.fit(X_train, y_train)
y_pred = clf_dt.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_dt,
"\nAccuracy:",
clf_dt.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
# DecisionTreeClassifier(max_depth=1)
# Accuracy: 0.6486006219458018
# F1: 0.3500410846343468
# DecisionTreeClassifier(max_depth=1, max_leaf_nodes=10)
# Accuracy: 0.6609336609336609
# F1: 0.4180722891566265
score_ = cross_val_score(clf_dt, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# Cross Validation Score: 0.6619583968701385
# ## 4.3 Random Forest Classifier
from sklearn.ensemble import RandomForestClassifier
pipeline_rf = Pipeline(
[("rf_classifier", RandomForestClassifier(criterion="gini", n_estimators=100))]
)
# creating a pipeline object
pipe = Pipeline([("classifier", RandomForestClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [RandomForestClassifier()],
"classifier__n_estimators": [10, 100, 1000],
"classifier__min_samples_leaf": [2, 5, 10, 15, 100],
"classifier__max_leaf_nodes": [2, 5, 10, 20],
"classifier__max_depth": [None, 1, 2, 5, 8, 15, 25, 30],
}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# RandomForestClassifier(max_depth=30, max_leaf_nodes=20, min_samples_leaf=100,
# n_estimators=1000)
# Accuracy: 0.6784260515603799
# F1: 0.35422343324250677
clf_rf = RandomForestClassifier(
max_depth=30, max_leaf_nodes=20, min_samples_leaf=100, n_estimators=1000
)
clf_rf.fit(X_train, y_train)
y_pred = clf_rf.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_rf,
"\nAccuracy:",
clf_rf.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
# RandomForestClassifier(max_depth=30, max_leaf_nodes=20, min_samples_leaf=100,
# n_estimators=1000)
# Accuracy: 0.6534873389604621
# F1: 0.28702010968921393
# RandomForestClassifier(max_depth=30, max_leaf_nodes=20, min_samples_leaf=100,
# n_estimators=1000)
# Accuracy: 0.6595296595296596
# F1: 0.3618421052631579
score_ = cross_val_score(clf_rf, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# Cross Validation Score: 0.6655923048464668
# Cross Validation Score: 0.666170694515041
# ## 4.4 ADAboost Classifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
pipeline_adaboost = Pipeline([("adaboost_classifier", AdaBoostClassifier())])
# creating a pipeline object
pipe = Pipeline([("classifier", AdaBoostClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [AdaBoostClassifier()],
# 'classifier__estimator': [GaussianNB(), DecisionTreeClassifier(max_depth = 1)],
"classifier__algorithm": ["SAMME", "SAMME.R"],
"classifier__n_estimators": [1000, 4000, 6000, 10000],
"classifier__learning_rate": [0.01, 0.05, 0.1, 0.5],
}
]
# dict_keys(['memory', 'steps', 'verbose', 'classifier',
# 'classifier__algorithm', 'classifier__base_estimator',
# 'classifier__learning_rate', 'classifier__n_estimators', 'classifier__random_state'])
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# AdaBoostClassifier(algorithm = 'SAMME',
# estimators = clf_dt,
# learning_rate = 0.05,
# n_estimators = 4000)
# Accuracy: 0.6811397557666214
# F1: 0.4035532994923857
# AdaBoostClassifier(algorithm='SAMME',
# base_estimator=DecisionTreeClassifier(max_depth=1),
# learning_rate=0.05, n_estimators=6000)
# Accuracy: 0.6534873389604621
# F1: 0.32055749128919864
# AdaBoostClassifier(algorithm='SAMME',
# base_estimator=DecisionTreeClassifier(max_depth=1),
# learning_rate=0.05, n_estimators=6000)
# Accuracy: 0.7192498621070049
# F1: 0.6563133018230924
clf_ada = AdaBoostClassifier(
algorithm="SAMME", base_estimator=clf_dt, learning_rate=0.2, n_estimators=1400
)
clf_ada.fit(X_train, y_train)
y_pred = clf_ada.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_ada,
"\nAccuracy:",
clf_ada.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
# AdaBoostClassifier(algorithm='SAMME',
# base_estimator=DecisionTreeClassifier(max_depth=1),
# learning_rate=0.05, n_estimators=8000)
# Accuracy: 0.6534873389604621
# F1: 0.3193717277486911
# AdaBoostClassifier(algorithm='SAMME',
# base_estimator=DecisionTreeClassifier(max_depth=1),
# learning_rate=0.005, n_estimators=8000)
# Accuracy: 0.6521545979564638
# F1: 0.30646589902568644
# AdaBoostClassifier(algorithm='SAMME', base_estimator=DecisionTreeClassifier(max_depth=1,
# max_leaf_nodes=10), learning_rate=0.05, n_estimators=8000)
# Accuracy: 0.6633906633906634
# F1: 0.3824855119124276
clf_ada = AdaBoostClassifier(
algorithm="SAMME", base_estimator=clf_dt, learning_rate=0.9, n_estimators=4000
)
clf_ada.fit(X_train, y_train)
y_pred = clf_ada.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_ada,
"\nAccuracy:",
clf_ada.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
# AdaBoostClassifier(algorithm='SAMME',
# base_estimator=DecisionTreeClassifier(max_depth=1, max_leaf_nodes=10),
# learning_rate=0.09, n_estimators=4000)
# Accuracy: 0.6633906633906634
# F1: 0.3832797427652734
score_ = cross_val_score(clf_ada, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# Cross Validation Score: 0.6665930447650759
# Cross Validation Score: 0.6644157694499638
# ## 4.5 VotingClassifier
from sklearn.ensemble import VotingClassifier
pipeline_vc = VotingClassifier(
[
(
"vc_classifier",
VotingClassifier(estimators=[("ada", clf_ada), ("gnb", GaussianNB())]),
)
]
)
# creating a pipeline object
pipe = Pipeline(
[
(
"classifier",
VotingClassifier(
estimators=[("ada", AdaBoostClassifier()), ("gnb", GaussianNB())]
),
)
]
)
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [
VotingClassifier(
estimators=[("ada", AdaBoostClassifier()), ("gnb", GaussianNB())]
)
],
"classifier__voting": ["hard", "soft"],
}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# VotingClassifier(estimators=[('ada', AdaBoostClassifier()),
# ('gnb', GaussianNB())])
# Accuracy 0.664179104477612
# F1 0.32653061224489793
clf_vc = VotingClassifier(estimators=[("ada", clf_ada), ("gnb", GaussianNB())])
clf_vc.fit(X_train, y_train)
y_pred = clf_vc.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_vc, "\nAccuracy", clf_vc.score(X_test, y_test), "\nF1", f1_score(y_test, y_pred)
)
score_ = cross_val_score(clf_vc, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# ## 4.6 SVM Classifier
from sklearn.svm import SVC
pipeline_svm = Pipeline([("svm_classifier", SVC(gamma="auto"))])
# kernel{‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}
# creating a pipeline object
pipe = Pipeline([("classifier", SVC())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{"classifier": [SVC()], "classifier__kernel": ["linear", "poly", "rbf", "sigmoid"]}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# SVC()
# Accuracy 0.6648575305291723
# F1 0.31955922865013775
clf_svc = SVC()
clf_svc.fit(X_train, y_train)
y_pred = clf_svc.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_svc,
"\nAccuracy",
clf_svc.score(X_test, y_test),
"\nF1",
f1_score(y_test, y_pred),
)
score_ = cross_val_score(clf_svc, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# ## 4.7 KNN Classifier
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
pipeline_knn = Pipeline(
[
(
"knn_classifier",
BaggingClassifier(
KNeighborsClassifier(), max_samples=0.5, max_features=0.5
),
)
]
)
# creating a pipeline object
pipe = Pipeline([("classifier", BaggingClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{"classifier": [BaggingClassifier()], "classifier__warm_start": [True, False]}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# BaggingClassifier(warm_start=True)
# Accuracy 0.49525101763907736
# F1 0.2927756653992396
clf_knn = BaggingClassifier()
BaggingClassifier(
KNeighborsClassifier(), warm_start=True, max_samples=5, max_features=5
)
clf_knn.fit(X_train, y_train)
y_pred = clf_knn.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_knn,
"\nAccuracy",
clf_knn.score(X_test, y_test),
"\nF1",
f1_score(y_test, y_pred),
)
score_ = cross_val_score(clf_knn, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# ## 4.8 Multi-Layer Perceptron
from sklearn.neural_network import MLPClassifier
pipeline_mlp = Pipeline([("mlp_classifier", MLPClassifier())])
# creating a pipeline object
pipe = Pipeline([("classifier", MLPClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [MLPClassifier()],
"classifier__hidden_layer_sizes": [(100,), (400,), (600,)],
"classifier__activation": ["tanh", "relu"],
"classifier__solver": ["sgd", "adam"],
"classifier__learning_rate": ["constant", "invscaling", "adaptive"],
"classifier__max_iter": [1000, 10000],
}
]
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# MLPClassifier(activation='tanh', hidden_layer_sizes=(400,),
# learning_rate='adaptive', max_iter=1000, solver='sgd')
# Accuracy: 0.666214382632293
# F1: 0.3031161473087819
clf_mlp = MLPClassifier(
hidden_layer_sizes=(40, 30, 20, 10),
max_iter=1000,
solver="sgd",
activation="tanh",
learning_rate="adaptive",
)
clf_mlp.fit(X_train, y_train)
y_pred = clf_mlp.predict(X_test)
y_pred = y_pred.astype(bool)
print(
clf_mlp,
"\nAccuracy:",
clf_mlp.score(X_test, y_test),
"\nF1:",
f1_score(y_test, y_pred),
)
score_ = cross_val_score(clf_mlp, X_train, y_train, cv=10).mean()
print("Cross Validation Score:", score_)
# ## 4.9 HistGradientBoostingClassifier
from sklearn.ensemble import HistGradientBoostingClassifier
pipeline_hgb = Pipeline([("hgb_classifier", HistGradientBoostingClassifier())])
# loss='log_loss', *, learning_rate = 0.01, max_iter=100, max_leaf_nodes=31, max_depth=None, min_samples_leaf=20, l2_regularization=0.0, max_bins=255, categorical_features=None, monotonic_cst=None, interaction_cst=None, warm_start=False, early_stopping='auto', scoring='loss', validation_fraction=0.1, n_iter_no_change=10, tol=1e-07, verbose=0, random_state=None, class_weight=None
# creating a pipeline object
pipe = Pipeline([("classifier", HistGradientBoostingClassifier())])
# creating a dictionary with candidate learning algorithms and their hyperparameters
grid_parameters = [
{
"classifier": [HistGradientBoostingClassifier()],
"classifier__loss": [
"log_loss",
"auto",
"binary_crossentropy",
"categorical_crossentropy",
],
"classifier__learning_rate": np.logspace(0, 4, 10),
"classifier__max_iter": [1000, 10000],
"classifier__max_depth": [None, 1, 2, 5, 8, 15, 25, 30],
"classifier__min_samples_leaf": [None, 2, 5, 10, 15, 100],
}
]
# loss='log_loss',
# learning_rate = 0.01,
# max_iter=100,
# max_leaf_nodes=31,
# max_depth=None,
# min_samples_leaf=20,
# l2_regularization=0.0,
# max_bins=255,
# categorical_features=None,
# monotonic_cst=None,
# interaction_cst=None,
# warm_start=False,
# early_stopping='auto',
# scoring='loss',
# validation_fraction=0.1,
# n_iter_no_change=10,
# tol=1e-07,
# verbose=0,
# class_weight=None
grid_search = GridSearchCV(pipe, grid_parameters, cv=4, verbose=0, n_jobs=-1)
best_model = grid_search.fit(X_train, y_train)
print(best_model.best_estimator_)
print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# # **5. Training**
# ## 5.1 Consolidating Pipelines
# pipelines = [pipeline_lr, pipeline_dt, pipeline_rf, pipeline_adaboost, pipeline_mlp]
# pipes = {
# 0: 'Logistic Regression',
# 1: 'Decision Tree',
# 2: 'Random Forest',
# 3: 'ADAboost',
# 4: 'Multi-Layer Perceptron'
# }
# ## 5.2 Fitting Models
# for pipe in pipelines:
# pipe.fit(X_train, y_train)
# ## 5.3 Scoring of Models
# ### 5.3.1 Accuracy
# for _, classifier in enumerate(pipelines):
# print('{} test accuracy: {}'.format(pipes[_], classifier.score(X_test, y_test)))
# # Logistic Regression test accuracy: 0.6628222523744912
# # Decision Tree test accuracy: 0.4497964721845319
# # Random Forest test accuracy: 0.5210312075983717
# # ADAboost test accuracy: 0.6723202170963365
# # Multi-Layer Perceptron test accuracy: 0.6648575305291723
# ### 5.3.2 F1
# from sklearn.metrics import f1_score
# for _, classifier in enumerate(pipelines):
# y_pred = classifier.predict(X_test)
# y_pred = y_pred.astype(bool)
# print('{} f1-score: {}'.format(pipes[_], f1_score(y_test, y_pred)))
# # Logistic Regression f1-score: 0.28694404591104733
# # Decision Tree f1-score: 0.2453531598513011
# # Random Forest f1-score: 0.28771228771228774
# # ADAboost f1-score: 0.3893805309734514
# # **6. Hyperparameter Tuning**
# ## 6.1 Importing GridSearchCV
# from sklearn.model_selection import GridSearchCV
# ## 6.2 Creating a pipeline fot GridSearchCV
# # creating a pipeline object
# pipe = Pipeline([('classifier', RandomForestClassifier())])
# # creating a dictionary with candidate learning algorithms and their hyperparameters
# grid_parameters = [
# {
# 'classifier': [LogisticRegression()],
# 'classifier__max_iter': [10000, 100000],
# 'classifier__penalty': ['l1', 'l2'],
# 'classifier__C': np.logspace(0, 4, 10),
# 'classifier__solver': ['newton-cg', 'saga', 'sag', 'liblinear']
# },
# {
# 'classifier': [RandomForestClassifier()],
# 'classifier__n_estimators': [10, 100, 1000],
# 'classifier__min_samples_leaf': [2, 5, 10, 15, 100],
# 'classifier__max_leaf_nodes': [2, 5, 10, 20],
# 'classifier__max_depth': [None, 1, 2, 5, 8, 15, 25, 30]
# },
# {
# 'classifier': [AdaBoostClassifier()],
# 'classifier__n_estimators': [100, 1000],
# 'classifier__learning_rate': [0.001, 0.01],
# 'classifier__random_state': [1127]
# },
# {
# 'classifier': [VotingClassifier()],
# 'classifier__voting': ['hard', 'soft']
# },
# {
# 'classifier': [MLPClassifier()],
# 'classifier__hidden_layer_sizes': [(100, ), (1000,)],
# 'classifier__activation': ['identity', 'logistic', 'tanh', 'relu'],
# 'classifier__solver': ['lbfgs', 'sgd', 'adam'],
# 'classifier__learning_rate': ['constant', 'invscaling', 'adaptive'],
# 'classifier__max_iter': [200, 1000, 10000]
# },
# ]
# grid_search = GridSearchCV(pipe, grid_parameters, cv = 4, verbose = 0, n_jobs = -1)
# best_model = grid_search.fit(X_train, y_train)
# print(best_model.best_estimator_)
# print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# ## 6.3 Creating GridSearchCV
# grid_search = GridSearchCV(pipe, grid_parameters, cv = 4, verbose = 0, n_jobs = -1)
# best_model = grid_search.fit(X_train, y_train)
# ## 6.4 Fitting best parameters
# print(best_model.best_estimator_)
# print("The mean accuracy of the model is: ", best_model.score(X_test, y_test))
# Pipeline(steps=[('classifier',
# RandomForestClassifier(max_depth=5, max_leaf_nodes=10,
# min_samples_leaf=15))])
# The mean accuracy of the model is: 0.6709633649932157
# Pipeline(steps=[('classifier',
# AdaBoostClassifier(learning_rate=0.01, n_estimators=1000,
# random_state=1127))])
# The mean accuracy of the model is: 0.6811397557666214
# ## 6.5 Calculating scores of best parameters
# ### 6.5.1 F1
# y_pred = best_model.predict(X_test)
# y_pred = y_pred.astype(bool)
# print('{} f1-score: {}'.format(pipes[_], f1_score(y_test, y_pred)))
# # Random Forest f1-score: 0.3081312410841655
# # ADAboost f1-score: 0.4035532994923857
# ### 6.5.2 Cross Validation
# from sklearn.model_selection import cross_val_score
# score_ = cross_val_score(best_model, X_train, y_train, cv = 3).mean()
# print('{} cross_validation-score: {}'.format(pipes[_], score_))
# # Random Forest cross_validation-score: 0.6551260431050521
# # Submitting as CSV file
sub = pd.DataFrame(clf_ada.predict(test_data), columns=["Made_Purchase"])
sub.index.name = "id"
sub.to_csv("submission.csv", encoding="UTF-8")
output = pd.read_csv("submission.csv")
# with open('/kaggle/working/submission.csv') as f:
# ff = f.readlines()
# print(*ff)
| false | 0 | 10,530 | 0 | 10,530 | 10,530 |
||
129828756
|
<jupyter_start><jupyter_text>Predicting Critical Heat Flux
### Context
This dataset was prepared for the journal article entitled "On the prediction of critical heat flux using a physics-informed machine learning-aided framework" (doi: 10.1016/j.applthermaleng.2019.114540). The dataset contains processed and compiled records of experimental critical heat flux and boundary conditions used for the work presented in the article.
Kaggle dataset identifier: predicting-heat-flux
<jupyter_script>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
import torch
from tqdm.notebook import tqdm
import torch.nn.functional as F
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from torch.utils.data import Dataset, DataLoader
warnings.filterwarnings("ignore")
seed = 42
torch.manual_seed(seed)
np.random.seed(seed)
sns.set_style("darkgrid")
pd.set_option("mode.chained_assignment", None)
def get_dataframe(path):
df = pd.read_csv(path)
return df.set_index(df.iloc[:, 0])
data = get_dataframe("/kaggle/input/playground-series-s3e15/data.csv")
original = get_dataframe(
"/kaggle/input/predicting-heat-flux/Data_CHF_Zhao_2020_ATE.csv"
)
def summary(text, df):
print(f"{text} shape: {df.shape}")
summ = pd.DataFrame(df.dtypes, columns=["dtypes"])
summ["null"] = df.isnull().sum()
summ["unique"] = df.nunique()
summ["min"] = df.min()
summ["median"] = df.median()
summ["max"] = df.max()
summ["mean"] = df.mean()
summ["std"] = df.std()
summ["duplicate"] = df.duplicated().sum()
return summ
summary("data", data)
summary("original", original)
sns.histplot(data, x="x_e_out [-]", color="r")
sns.histplot(original, x="x_e_out [-]", color="b")
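# Optional extra (a small sketch, not in the original notebook): overlay both distributions on a
# single axis with labels so the competition data and the original CHF data are easier to compare.
fig, ax = plt.subplots(figsize=(8, 4))
sns.histplot(data=data, x="x_e_out [-]", color="r", alpha=0.5, label="competition data", ax=ax)
sns.histplot(data=original, x="x_e_out [-]", color="b", alpha=0.5, label="original CHF data", ax=ax)
ax.legend()
ax.set_title("x_e_out [-]: competition vs. original data")
plt.show()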
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/828/129828756.ipynb
|
predicting-heat-flux
|
saurabhshahane
|
[{"Id": 129828756, "ScriptId": 38611606, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 3402809, "CreationDate": "05/16/2023 19:17:37", "VersionNumber": 2.0, "Title": "\ud83d\udd25 Pytorch-PS3E15 \ud83d\udd25", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 50.0, "LinesInsertedFromPrevious": 49.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 1.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186207865, "KernelVersionId": 129828756, "SourceDatasetVersionId": 1921393}]
|
[{"Id": 1921393, "DatasetId": 1145869, "DatasourceVersionId": 1959907, "CreatorUserId": 2411256, "LicenseName": "Attribution 4.0 International (CC BY 4.0)", "CreationDate": "02/08/2021 11:44:07", "VersionNumber": 1.0, "Title": "Predicting Critical Heat Flux", "Slug": "predicting-heat-flux", "Subtitle": "prediction of critical heat flux using Machine Learning", "Description": "### Context\n\nThis dataset was prepared for the journal article entitled \"On the prediction of critical heat flux using a physics-informed machine learning-aided framework\" (doi: 10.1016/j.applthermaleng.2019.114540). The dataset contains processed and compiled records of experimental critical heat flux and boundary conditions used for the work presented in the article. \n\n### Acknowledgements\n\nZhao, Xingang (2020), \u201cData for: On the prediction of critical heat flux using a physics-informed machine learning-aided framework\u201d, Mendeley Data, V1, doi: 10.17632/5p5h37tyv7.1", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1145869, "CreatorUserId": 2411256, "OwnerUserId": 2411256.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1921393.0, "CurrentDatasourceVersionId": 1959907.0, "ForumId": 1163376, "Type": 2, "CreationDate": "02/08/2021 11:44:07", "LastActivityDate": "02/08/2021", "TotalViews": 6889, "TotalDownloads": 589, "TotalVotes": 42, "TotalKernels": 78}]
|
[{"Id": 2411256, "UserName": "saurabhshahane", "DisplayName": "Saurabh Shahane", "RegisterDate": "10/26/2018", "PerformanceTier": 4}]
|
| false | 0 | 431 | 0 | 548 | 431 |
||
129828824
|
#
# ---
# # **Module** | Data Analysis: COVID-19 Dashboard
# **Exercises** Notebook
# Professor [André Perez](https://www.linkedin.com/in/andremarcosperez/)
# ---
# # **Topics**
# Introduction;
# Exploratory Data Analysis;
# Interactive Data Visualization;
# Storytelling.
# ---
# ## 1\. Introduction
# ### **1.1. TLDR**
# - **Dashboard**:
# - Google Data Studio (https://lookerstudio.google.com/s/hs31wGpFpX0).
# - **Processing**:
# - Kaggle Notebook (`link`).
# - **Sources**:
# - Cases from Johns Hopkins University ([link](https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_daily_reports));
# - Vaccination from the University of Oxford ([link](https://covid.ourworldindata.org/data/owid-covid-data.csv)).
# ### 1.2\. Context
# Today, COVID-19 no longer represents a major threat to humanity: with preventive measures and vaccination, the severity of its effects has been mitigated. The success of these efforts is evidenced by the fact that the preventive measures are no longer in use.
# Nevertheless, the data accumulated during this pandemic can be used to understand in detail its propagation dynamics and the effectiveness of vaccination, and to look for further insights by comparing the outcomes of regions that adopted different measures. In this work, we explore the data for Brazil in 2021 to deepen our understanding of this historic event.
# The data on **COVID-19 cases** are compiled by the Center for Systems Science and Engineering of the American university **Johns Hopkins** ([link](https://www.jhu.edu)). The data have been updated daily since January 2020, with a temporal granularity of days and a geographic granularity of country subregions (states, counties, etc.). The project website can be accessed at this [link](https://systems.jhu.edu/research/public-health/ncov/) and the data at this [link](https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_daily_reports). The fields derived from its processing are described below; a short illustrative example follows the list.
# - date: reference date;
# - state: state;
# - country: country;
# - population: estimated population;
# - confirmed: cumulative number of confirmed cases;
# - confirmed_1d: daily number of confirmed cases;
# - confirmed_moving_avg_7d: 7-day moving average of the daily number of confirmed cases;
# - confirmed_moving_avg_7d_rate_14d: 7-day moving average divided by the 7-day moving average from 14 days earlier;
# - deaths: cumulative number of deaths;
# - deaths_1d: daily number of deaths;
# - deaths_moving_avg_7d: 7-day moving average of the daily number of deaths;
# - deaths_moving_avg_7d_rate_14d: 7-day moving average divided by the 7-day moving average from 14 days earlier;
# - month: reference month;
# - year: reference year.
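# A minimal, self-contained illustration (with made-up numbers, not the real data) of how the
# rolling metrics above are built and how the 14-day rate maps to a trend label, using the same
# 0.75 / 1.15 thresholds applied later in this notebook:
import numpy as np
import pandas as pd

daily_confirmed = pd.Series(100 + 10 * np.arange(28))  # hypothetical daily new confirmed cases
moving_avg_7d = daily_confirmed.rolling(window=7).mean()  # confirmed_moving_avg_7d
rate_14d = moving_avg_7d / moving_avg_7d.shift(14)  # confirmed_moving_avg_7d_rate_14d
trend = rate_14d.apply(
    lambda r: np.nan
    if np.isnan(r)
    else "downward"
    if r < 0.75
    else "upward"
    if r > 1.15
    else "stable"
)
print(rate_14d.dropna().round(2).tolist())  # all ratios > 1.15 for this steadily growing series
print(trend.dropna().unique())  # -> ['upward']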
# The data on COVID-19 vaccination are compiled by the Our World in Data (OWID) project of the British university of Oxford (link). The data have been updated daily since January 2020, with a temporal granularity of days and a geographic granularity of countries. The project website can be accessed at this link and the data at this link. The fields derived from its processing are described below.
# - **date**: reference date;
# - **country**: country;
# - **population**: estimated population;
# - **total**: cumulative number of doses administered;
# - **one_shot**: cumulative number of people with one dose;
# - **one_shot_perc**: cumulative share of the population with one dose;
# - **two_shots**: cumulative number of people with two doses;
# - **two_shots_perc**: cumulative share of the population with two doses;
# - **three_shots**: cumulative number of people with three doses;
# - **three_shots_perc**: cumulative share of the population with three doses;
# - **month**: reference month;
# - **year**: reference year.
# ### 1.3\. Packages and libraries
import math
from typing import Iterator
from datetime import datetime, timedelta
import numpy as np
import pandas as pd
# ## 2\. Exploratory Data Analysis - Brazil, 2021
# ### 2.1 - Number of Cases.
#
def date_range(start_date: datetime, end_date: datetime) -> Iterator[datetime]:
date_range_days: int = (end_date - start_date).days
for lag in range(date_range_days):
yield start_date + timedelta(lag)
start_date = datetime(2021, 1, 1)
# date_range excludes end_date (like Python's range), so use 2022-01-01 to also include Dec 31, 2021
end_date = datetime(2022, 1, 1)
cases = None
cases_is_empty = True
for date in date_range(start_date=start_date, end_date=end_date):
date_str = date.strftime("%m-%d-%Y")
data_source_url = f"https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/{date_str}.csv"
case = pd.read_csv(data_source_url, sep=",")
case = case.drop(
[
"FIPS",
"Admin2",
"Last_Update",
"Lat",
"Long_",
"Recovered",
"Active",
"Combined_Key",
"Case_Fatality_Ratio",
],
axis=1,
)
case = case.query('Country_Region == "Brazil"').reset_index(drop=True)
case["Date"] = pd.to_datetime(date.strftime("%Y-%m-%d"))
if cases_is_empty:
cases = case
cases_is_empty = False
    else:
        # DataFrame.append was removed in pandas 2.x; pd.concat is the supported equivalent
        cases = pd.concat([cases, case], ignore_index=True)
cases.query('Province_State == "Sao Paulo"').head(2)
cases.head(5)
cases.shape
cases.info()
# ### 2.2 - Vaccination.
#
vaccines = pd.read_csv(
"https://covid.ourworldindata.org/data/owid-covid-data.csv",
sep=",",
parse_dates=[3],
infer_datetime_format=True,
)
vaccines.head(2)
# Let's select the columns of interest and the rows referring to Brazil.
vaccines = vaccines.query('location == "Brazil"').reset_index(drop=True)
vaccines = vaccines[
[
"location",
"population",
"total_vaccinations",
"people_vaccinated",
"people_fully_vaccinated",
"total_boosters",
"date",
]
]
vaccines.head(2)
vaccines.shape
vaccines.info()
# ## 3\. Data Transformation
# ### 3.1\. Number of Cases
# - rename columns,
# - standardize state names,
# - add time keys,
# - estimate state populations,
# - add columns for the 7-day moving averages and the 14-day rate,
# - add trend columns,
#
cases = cases.rename(columns={"Province_State": "state", "Country_Region": "country"})
for col in cases.columns:
cases = cases.rename(columns={col: col.lower()})
states_map = {
"Amapa": "Amapá",
"Ceara": "Ceará",
"Espirito Santo": "Espírito Santo",
"Goias": "Goiás",
"Para": "Pará",
"Paraiba": "Paraíba",
"Parana": "Paraná",
"Piaui": "Piauí",
"Rondonia": "Rondônia",
"Sao Paulo": "São Paulo",
}
cases["state"] = cases["state"].apply(
lambda state: states_map.get(state) if state in states_map.keys() else state
)
cases["month"] = cases["date"].apply(lambda date: date.strftime("%Y-%m"))
cases["year"] = cases["date"].apply(lambda date: date.strftime("%Y"))
cases["population"] = round(100000 * (cases["confirmed"] / cases["incident_rate"]))
cases = cases.drop("incident_rate", axis=1)
cases_ = None
cases_is_empty = True
def get_trend(rate: float) -> str:
if np.isnan(rate):
return np.NaN
if rate < 0.75:
status = "downward"
elif rate > 1.15:
status = "upward"
else:
status = "stable"
return status
for state in cases["state"].drop_duplicates():
cases_per_state = cases.query(f'state == "{state}"').reset_index(drop=True)
cases_per_state = cases_per_state.sort_values(by=["date"])
cases_per_state["confirmed_1d"] = cases_per_state["confirmed"].diff(periods=1)
cases_per_state["confirmed_moving_avg_7d"] = np.ceil(
cases_per_state["confirmed_1d"].rolling(window=7).mean()
)
cases_per_state["confirmed_moving_avg_7d_rate_14d"] = cases_per_state[
"confirmed_moving_avg_7d"
] / cases_per_state["confirmed_moving_avg_7d"].shift(periods=14)
cases_per_state["confirmed_trend"] = cases_per_state[
"confirmed_moving_avg_7d_rate_14d"
].apply(get_trend)
cases_per_state["deaths_1d"] = cases_per_state["deaths"].diff(periods=1)
cases_per_state["deaths_moving_avg_7d"] = np.ceil(
cases_per_state["deaths_1d"].rolling(window=7).mean()
)
cases_per_state["deaths_moving_avg_7d_rate_14d"] = cases_per_state[
"deaths_moving_avg_7d"
] / cases_per_state["deaths_moving_avg_7d"].shift(periods=14)
cases_per_state["deaths_trend"] = cases_per_state[
"deaths_moving_avg_7d_rate_14d"
].apply(get_trend)
if cases_is_empty:
cases_ = cases_per_state
cases_is_empty = False
    else:
        # DataFrame.append was removed in pandas 2.x; pd.concat is the supported equivalent
        cases_ = pd.concat([cases_, cases_per_state], ignore_index=True)
cases = cases_
cases_ = None
cases["population"] = cases["population"].astype("Int64")
cases["confirmed_1d"] = cases["confirmed_1d"].astype("Int64")
cases["confirmed_moving_avg_7d"] = cases["confirmed_moving_avg_7d"].astype("Int64")
cases["deaths_1d"] = cases["deaths_1d"].astype("Int64")
cases["deaths_moving_avg_7d"] = cases["deaths_moving_avg_7d"].astype("Int64")
cases = cases[
[
"date",
"country",
"state",
"population",
"confirmed",
"confirmed_1d",
"confirmed_moving_avg_7d",
"confirmed_moving_avg_7d_rate_14d",
"confirmed_trend",
"deaths",
"deaths_1d",
"deaths_moving_avg_7d",
"deaths_moving_avg_7d_rate_14d",
"deaths_trend",
"month",
"year",
]
]
cases.head(n=2)
# ### 3.2\. Vaccination
# - fill in missing data,
# - select dates,
# - rename columns,
# - add time keys,
# - compute relative values,
# - enforce consistent data types,
# - reorder columns,
vaccines = vaccines.ffill()  # forward-fill missing values; fillna(method="ffill") is deprecated in recent pandas
vaccines = vaccines[
(vaccines["date"] >= "2021-01-01") & (vaccines["date"] <= "2021-12-31")
].reset_index(drop=True)
vaccines = vaccines.rename(
columns={
"location": "country",
"total_vaccinations": "total",
"people_vaccinated": "one_shot",
"people_fully_vaccinated": "two_shots",
"total_boosters": "three_shots",
}
)
vaccines["month"] = vaccines["date"].apply(lambda date: date.strftime("%Y-%m"))
vaccines["year"] = vaccines["date"].apply(lambda date: date.strftime("%Y"))
vaccines["one_shot_perc"] = round(vaccines["one_shot"] / vaccines["population"], 4)
vaccines["two_shots_perc"] = round(vaccines["two_shots"] / vaccines["population"], 4)
vaccines["three_shots_perc"] = round(
vaccines["three_shots"] / vaccines["population"], 4
)
vaccines["population"] = vaccines["population"].astype("Int64")
vaccines["total"] = vaccines["total"].astype("Int64")
vaccines["one_shot"] = vaccines["one_shot"].astype("Int64")
vaccines["two_shots"] = vaccines["two_shots"].astype("Int64")
vaccines["three_shots"] = vaccines["three_shots"].astype("Int64")
vaccines = vaccines[
[
"date",
"country",
"population",
"total",
"one_shot",
"one_shot_perc",
"two_shots",
"two_shots_perc",
"three_shots",
"three_shots_perc",
"month",
"year",
]
]
# ## 4\. Loading
cases.to_csv("./covid-cases.csv", sep=",", index=False)
vaccines.to_csv("./covid-vaccines.csv", sep=",", index=False)
"total_boosters",
"date",
]
]
vaccines.head(2)
vaccines.shape
vaccines.info()
# ## 3\. Transformação Dados
# ### 3.1\. Quantidade de Casos
# - renomear colunas,
# - padronizar nomenclatura dos estados,
# - chaves temporais,
# - estimar população estados,
# - acrescentar colunas para médias móveis 7d e 14d,
# - acrescentar coluna para tendência,
#
cases = cases.rename(columns={"Province_State": "state", "Country_Region": "country"})
for col in cases.columns:
cases = cases.rename(columns={col: col.lower()})
states_map = {
"Amapa": "Amapá",
"Ceara": "Ceará",
"Espirito Santo": "Espírito Santo",
"Goias": "Goiás",
"Para": "Pará",
"Paraiba": "Paraíba",
"Parana": "Paraná",
"Piaui": "Piauí",
"Rondonia": "Rondônia",
"Sao Paulo": "São Paulo",
}
cases["state"] = cases["state"].apply(
lambda state: states_map.get(state) if state in states_map.keys() else state
)
cases["month"] = cases["date"].apply(lambda date: date.strftime("%Y-%m"))
cases["year"] = cases["date"].apply(lambda date: date.strftime("%Y"))
cases["population"] = round(100000 * (cases["confirmed"] / cases["incident_rate"]))
cases = cases.drop("incident_rate", axis=1)
cases_ = None
cases_is_empty = True
def get_trend(rate: float) -> str:
if np.isnan(rate):
return np.NaN
if rate < 0.75:
status = "downward"
elif rate > 1.15:
status = "upward"
else:
status = "stable"
return status
for state in cases["state"].drop_duplicates():
cases_per_state = cases.query(f'state == "{state}"').reset_index(drop=True)
cases_per_state = cases_per_state.sort_values(by=["date"])
cases_per_state["confirmed_1d"] = cases_per_state["confirmed"].diff(periods=1)
cases_per_state["confirmed_moving_avg_7d"] = np.ceil(
cases_per_state["confirmed_1d"].rolling(window=7).mean()
)
cases_per_state["confirmed_moving_avg_7d_rate_14d"] = cases_per_state[
"confirmed_moving_avg_7d"
] / cases_per_state["confirmed_moving_avg_7d"].shift(periods=14)
cases_per_state["confirmed_trend"] = cases_per_state[
"confirmed_moving_avg_7d_rate_14d"
].apply(get_trend)
cases_per_state["deaths_1d"] = cases_per_state["deaths"].diff(periods=1)
cases_per_state["deaths_moving_avg_7d"] = np.ceil(
cases_per_state["deaths_1d"].rolling(window=7).mean()
)
cases_per_state["deaths_moving_avg_7d_rate_14d"] = cases_per_state[
"deaths_moving_avg_7d"
] / cases_per_state["deaths_moving_avg_7d"].shift(periods=14)
cases_per_state["deaths_trend"] = cases_per_state[
"deaths_moving_avg_7d_rate_14d"
].apply(get_trend)
if cases_is_empty:
cases_ = cases_per_state
cases_is_empty = False
else:
cases_ = cases_.append(cases_per_state, ignore_index=True)
cases = cases_
cases_ = None
cases["population"] = cases["population"].astype("Int64")
cases["confirmed_1d"] = cases["confirmed_1d"].astype("Int64")
cases["confirmed_moving_avg_7d"] = cases["confirmed_moving_avg_7d"].astype("Int64")
cases["deaths_1d"] = cases["deaths_1d"].astype("Int64")
cases["deaths_moving_avg_7d"] = cases["deaths_moving_avg_7d"].astype("Int64")
cases = cases[
[
"date",
"country",
"state",
"population",
"confirmed",
"confirmed_1d",
"confirmed_moving_avg_7d",
"confirmed_moving_avg_7d_rate_14d",
"confirmed_trend",
"deaths",
"deaths_1d",
"deaths_moving_avg_7d",
"deaths_moving_avg_7d_rate_14d",
"deaths_trend",
"month",
"year",
]
]
cases.head(n=2)
# ### 3.2\. Vacinação
# - preenchimendo dados faltantes,
# - seleção datas,
# - renomear colunas,
# - chaves temporais,
# - valores relativos,
# - consistência tipos dados,
# - reorganização ordem colunas,
vaccines = vaccines.fillna(method="ffill")
vaccines = vaccines[
(vaccines["date"] >= "2021-01-01") & (vaccines["date"] <= "2021-12-31")
].reset_index(drop=True)
vaccines = vaccines.rename(
columns={
"location": "country",
"total_vaccinations": "total",
"people_vaccinated": "one_shot",
"people_fully_vaccinated": "two_shots",
"total_boosters": "three_shots",
}
)
vaccines["month"] = vaccines["date"].apply(lambda date: date.strftime("%Y-%m"))
vaccines["year"] = vaccines["date"].apply(lambda date: date.strftime("%Y"))
vaccines["one_shot_perc"] = round(vaccines["one_shot"] / vaccines["population"], 4)
vaccines["two_shots_perc"] = round(vaccines["two_shots"] / vaccines["population"], 4)
vaccines["three_shots_perc"] = round(
vaccines["three_shots"] / vaccines["population"], 4
)
vaccines["population"] = vaccines["population"].astype("Int64")
vaccines["total"] = vaccines["total"].astype("Int64")
vaccines["one_shot"] = vaccines["one_shot"].astype("Int64")
vaccines["two_shots"] = vaccines["two_shots"].astype("Int64")
vaccines["three_shots"] = vaccines["three_shots"].astype("Int64")
vaccines = vaccines[
[
"date",
"country",
"population",
"total",
"one_shot",
"one_shot_perc",
"two_shots",
"two_shots_perc",
"three_shots",
"three_shots_perc",
"month",
"year",
]
]
# ## 4\. Carregamento
cases.to_csv("./covid-cases.csv", sep=",", index=False)
vaccines.to_csv("./covid-vaccines.csv", sep=",", index=False)
| false | 0 | 3,923 | 1 | 3,923 | 3,923 |
<jupyter_start><jupyter_text>Compressed 5 Percent
Kaggle dataset identifier: compressed-5-percent
<jupyter_script>import os
import rasterio as rs
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.colors as mc
import seaborn as sns
import folium
import branca
from sklearn.cluster import KMeans
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
from mpl_toolkits.basemap import Basemap
def get_fileTime(src):
return str(
src.name.split("/")[len(src.name.split("/")) - 1]
.replace(".tif", "")
.split("_")[1]
)
def ndre_(src):
NIR = src.read(5)
RedEdge = src.read(4)
NDRE = np.divide((NIR - RedEdge), (NIR + RedEdge))
if np.count_nonzero(np.isnan(NDRE)):
NDRE[np.isnan(NDRE)] = 0
return NDRE
def msavi_(src):
NIR = src.read(5)
Red = src.read(3)
MSAVI = (2 * NIR + 1 - np.sqrt((2 * NIR + 1) ** 2 - 8 * (NIR - Red))) / 2
# replace any negative values with 0
MSAVI[MSAVI < 0] = 0
return MSAVI
def gndvi_(src):
NIR = src.read(5)
Green = src.read(2)
GNDVI = (NIR - Green) / (NIR + Green)
return GNDVI
def ndvi_(src):
NIR = src.read(5)
Red = src.read(3)
NDVI = (NIR - Red) / (NIR + Red)
return NDVI
weights_may = {"ndvi": 0.303, "ndre": 0.16, "msavi": 0.263, "gndvi": 0.275}
weights_june = {"ndvi": 0.262, "ndre": 0.216, "msavi": 0.258, "gndvi": 0.263}
weights_july = {"ndvi": 0.257, "ndre": 0.24, "msavi": 0.25, "gndvi": 0.254}
weights_month = {"may": -2.069, "june": 1.642, "july": 1.427}
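# Note (added for clarity): the per-month dictionaries above weight the four vegetation
# indices (NDVI, NDRE, MSAVI, GNDVI) within each month, and `weights_month` then combines
# the monthly composites into a single weighted index inside `get_weighted` below.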
def get_weighted(san: str):
PATH = "/kaggle/input/compressed-5-percent"
folds = [san]
for f in folds:
vc_may = []
vc_june = []
vc_july = []
files = os.listdir(PATH + "/" + f)
for wheat in files:
data = rs.open(PATH + "/" + f + "/" + wheat)
ndvi = data.read(6)
ndre = ndre_(data)
msavi = msavi_(data)
gndvi = gndvi_(data)
if get_fileTime(data).split("-")[0] == "05":
vc_may = (
ndvi * weights_may["ndvi"]
+ ndre * weights_may["ndre"]
+ msavi * weights_may["msavi"]
+ gndvi * weights_may["gndvi"]
)
if get_fileTime(data).split("-")[0] == "06":
new_index_june = (
ndvi * weights_june["ndvi"]
+ ndre * weights_june["ndre"]
+ msavi * weights_june["msavi"]
+ gndvi * weights_june["gndvi"]
)
vc_june.append(new_index_june)
if get_fileTime(data).split("-")[0] == "07":
new_index_july = (
ndvi * weights_july["ndvi"]
+ ndre * weights_july["ndre"]
+ msavi * weights_july["msavi"]
+ gndvi * weights_july["gndvi"]
)
vc_july.append(new_index_july)
vc_june_ = np.mean(vc_june, axis=0)
vc_july_ = np.mean(vc_july, axis=0)
weighted_comb = (
vc_may * weights_month["may"]
+ vc_june_ * weights_month["june"]
+ vc_july_ * weights_month["july"]
)
return weighted_comb
def change_cent(labels, high_label, low_label):
lab = labels.copy()
temp = np.full(lab.shape, -1)
if high_label == 2 and low_label == 0:
temp = lab
if high_label == 2 and low_label == 1:
# 1 --> 0 AND 0 --> 1
temp[lab == 1] = 0
temp[lab == 0] = 1
temp[lab == 2] = 2
if high_label == 1 and low_label == 0:
# 1 --> 2 AND 2 --> 1
temp[lab == 1] = 2
temp[lab == 2] = 1
temp[lab == 0] = 0
if high_label == 1 and low_label == 2:
# 1 --> 2 AND 2 --> 0 AND 0 --> 1
temp[lab == 1] = 2
temp[lab == 2] = 0
temp[lab == 0] = 1
if high_label == 0 and low_label == 1:
# 0 --> 2 AND 1 --> 0 AND 2 --> 1
temp[lab == 0] = 2
temp[lab == 1] = 0
temp[lab == 2] = 1
    if high_label == 0 and low_label == 2:
        # 0 --> 2 AND 2 --> 0 AND 1 stays 1
        temp[lab == 0] = 2
        temp[lab == 2] = 0
        temp[lab == 1] = 1
return temp
def run(pole: str):
coo = pd.read_csv(
"/kaggle/input/compress-5-percent-coordinates/" + pole + ".csv",
usecols=["x", "y"],
)
data = {"Long": coo["x"], "Lat": coo["y"], "vi": get_weighted(pole).flatten()}
return pd.DataFrame(data=data)
POLE = "27"
# # KMEANS
def plot_KMEANS(pole: str, prod):
df = run(pole)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(df)
centroids = [kmeans.cluster_centers_[i][2] for i in range(0, 3)]
high_label = np.argmax(centroids)
low_label = np.argmin(centroids)
labels = change_cent(kmeans.labels_, high_label, low_label)
# Plot
cmap = matplotlib.colors.LinearSegmentedColormap.from_list(
"", ["#964B00", "yellow", "green"]
)
plt.scatter(df["Long"], df["Lat"], c=labels.reshape(-1, 1), cmap=cmap)
plt.colorbar(label="Weighted")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("KMEANS " + pole)
plt.show()
# For Correlation
prod.append(
{"n": pole, "yield": _yield.get(pole), "high": np.count_nonzero(labels == 2)}
)
return labels
# # Correlation between KMEANS AND HARVEST MAP
_yield = {"13": 6, "23": 9, "24": 6, "25": 8, "27": 9.5}
prod = []
folds = ["13", "23", "24", "25", "27"]
[plot_KMEANS(f, prod) for f in folds]
kmeans_corr = pd.DataFrame(prod)
kmeans_corr[["yield", "high"]].corr(method="pearson")
# ---
# # DBSCAN
def coordinated(pole: str):
df = run(pole)
llon, ulon = np.min(df["Long"]), np.max(df["Long"])
llat, ulat = np.min(df["Lat"]), np.max(df["Lat"])
my_map = Basemap(
projection="merc",
resolution="l",
area_thresh=1000.0,
llcrnrlon=llon,
llcrnrlat=llat,
urcrnrlon=ulon,
urcrnrlat=ulat,
)
xs, ys = my_map(np.asarray(df.Long), np.asarray(df.Lat))
df["xm"] = xs.tolist()
df["ym"] = ys.tolist()
val_scal = StandardScaler().fit_transform(df[["xm", "ym", "vi"]])
return df, val_scal
def plot_DBSCAN(pole: str):
df, val_scal = coordinated(pole)
db = DBSCAN(eps=0.16, min_samples=38).fit(val_scal)
labels = db.labels_
lab = labels.copy()
temp = np.full(lab.shape, -2)
n_clusters = len(set(lab)) - (1 if -1 in lab else 0)
for cluster in range(0, n_clusters): # Threshold
if np.mean(df.vi[labels == cluster]) >= 0.6:
temp[lab == cluster] = 2
elif (np.mean(df.vi[labels == cluster]) < 0.6) & (
np.mean(df.vi[labels == cluster]) >= 0.23
):
temp[lab == cluster] = 1
elif np.mean(df.vi[labels == cluster]) < 0.23:
temp[lab == cluster] = 0
temp[lab == -1] = 1
cmap = matplotlib.colors.LinearSegmentedColormap.from_list(
"", ["#964B00", "yellow", "green"]
)
plt.scatter(df["Long"], df["Lat"], c=temp.reshape(-1, 1), cmap=cmap)
plt.colorbar(label="Weighted")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("DBSCAN " + pole)
plt.show()
return temp
# ---
# # Define Epsilon for DBSCAN using NearestNeighbors
from sklearn.neighbors import NearestNeighbors
from matplotlib import pyplot as plt
df, val_scal = coordinated(POLE)
neighbors = NearestNeighbors(n_neighbors=6)
neighbors_fit = neighbors.fit(val_scal)
distances, indices = neighbors_fit.kneighbors(val_scal)
distances = np.sort(distances, axis=0)
distances = distances[:, 1]
plt.plot(distances)
plt.show()
plt.xlim([20000, 21500])
plt.ylim([0, 0.25])
plt.plot(distances)
plt.axhline(y=0.16, color="r", linestyle="-")
plt.show()
# **Epsilon is 0.16**
# ---
# # Comparison KMEANS and DBSCAN
matrix1 = plot_KMEANS(POLE, prod)
matrix2 = plot_DBSCAN(POLE)
from sklearn.metrics.pairwise import cosine_similarity
data = {"kmeans": matrix1, "dbscan": matrix2}
df = pd.DataFrame(data=data)
cos_sim = cosine_similarity([df["kmeans"]], [df["dbscan"]])
print("Cosine Similarity: ", cos_sim)
<jupyter_start><jupyter_text>aerial dense link
Kaggle dataset identifier: aerial-dense-link
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
fpneff = pd.read_csv("/kaggle/input/aerial/FPN/fpn_efficientnetb5.csv")
import numpy as np
from matplotlib import pyplot as plt
import math
import matplotlib
fig = plt.figure(figsize=(12, 5))
fig.suptitle("mIoU Curve")
ax1 = fig.add_subplot()
# ax1.plot(incep["val_accuracy"])
# quadratic = [num**2 for num in x]
# cubic = [num**3 for num in x
# ax1.plot(res["accuracy"])
# ax1.plot(pd.read_csv("/kaggle/input/aerial/pspnet/PSP_mobilenetv2.csv")["val_my_mean_iou"])
# ax1.plot(xpe["accuracy"])
ax1.plot(fpneff["val_my_mean_iou"])
ax1.plot(fpneff["my_mean_iou"])
# ax1.plot(vgg["accuracy"])
# ax1.plot(vgg["val_accuracy"]*0.97)
plt.xlabel("No. of Epoch", fontsize=20)
plt.ylabel("mIOU", fontsize=20)
plt.legend(["Val_MIOU", "MIOU"], loc="lower right", prop={"size": 15})
# ax1.legend(loc = 'upper left')
# ax2.legend(loc = 'upper right')
# plt.ylim(0.5,1.0)
plt.show()
import glob
import os
import numpy as np
import random
import matplotlib.pyplot as plt
import imageio.v2 as imageio
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import transforms as T
from torchvision.transforms import functional as TF
from torch.utils.data import Dataset, DataLoader
from PIL import Image
# from torchsummary import summary
from tqdm import tqdm
# from cityscapesscripts.helpers.labels import trainId2label as t2l
import cv2
# from sklearn.model_selection import train_test_split
from torchmetrics.classification import MulticlassJaccardIndex
NUM_CLASSES = 20
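# Note (added for clarity): `mapping_20` below collapses the raw Cityscapes label IDs
# (0-33 and -1) into NUM_CLASSES = 20 training classes, with class 0 acting as a
# catch-all for the "void"/ignored categories.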
mapping_20 = {
0: 0,
1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 1,
8: 2,
9: 0,
10: 0,
11: 3,
12: 4,
13: 5,
14: 0,
15: 0,
16: 0,
17: 6,
18: 0,
19: 7,
20: 8,
21: 9,
22: 10,
23: 11,
24: 12,
25: 13,
26: 14,
27: 15,
28: 16,
29: 0,
30: 0,
31: 17,
32: 18,
33: 19,
-1: 0,
}
from torchvision.models.segmentation import (
deeplabv3_resnet50,
DeepLabV3_ResNet50_Weights,
)
class CityscapesDataset(Dataset):
def __init__(self, image_dir, label_dir):
# self.labelcolorpaths=[]
self.image_dir = image_dir
self.label_dir = label_dir
self.imagepaths = sorted(glob.glob(self.image_dir))
labelpaths = sorted(glob.glob(self.label_dir))
self.label_paths = []
for img in labelpaths:
if "labelIds" in os.path.basename(img):
self.label_paths.append(img)
# dir = [os.path.join(label_dir, dir) for dir in sorted(os.listdir(label_dir))]
# for dir1 in dir:
# labelpaths = [os.path.join(dir1, dir) for dir in sorted(os.listdir(dir1))]
# for img in labelpaths:
# if 'color' in os.path.basename(img):
# self.labelcolorpaths.append(img)
def __len__(self):
return len(self.imagepaths)
def __getitem__(self, idx):
image = imageio.imread(self.imagepaths[idx])
mask = imageio.imread(self.label_paths[idx])
# mask_color = imageio.imread(self.labelcolorpaths[idx])
image = cv2.resize(image, (512, 256))
        mask = cv2.resize(mask, (512, 256), interpolation=cv2.INTER_NEAREST)  # nearest-neighbour keeps label IDs intact
# mask_color = cv2.resize(mask_color, (220,110))
for i, j in np.ndindex(mask.shape):
mask[i][j] = mapping_20[mask[i][j]]
img = torch.tensor(image, dtype=torch.float32)
img = torch.tensor(img.tolist())
img = img.permute(2, 0, 1)
mask = torch.tensor(mask, dtype=torch.uint8)
mask = torch.tensor(mask.tolist())
# mask_color = torch.tensor(mask_color, dtype=torch.float32)
# mask_color = torch.tensor(mask_color.tolist())
# mask_color = mask_color.permute(2, 0, 1)
return img, mask # , mask_color
# Define the number of epochs and the learning rate
epochs = 4
learning_rate = 1e-3
# and the device to run the code on ('cuda' or 'cpu')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Define the loss function, which can be custom or taken from the nn package
batch_size = 16
weights = DeepLabV3_ResNet50_Weights.DEFAULT
# Load the pretrained model, then swap the classifier head so it outputs NUM_CLASSES classes
model = deeplabv3_resnet50(weights=weights)
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)
model = model.to(device)
# define the optimizer
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
train_dir = "/kaggle/input/cityscapes/Cityspaces/images/train/*/*"
label_train_dir = "/kaggle/input/cityscapes/Cityspaces/gtFine/train/*/*"
val_dir = "/kaggle/input/cityscapes/Cityspaces/images/val/*/*"
label_val_dir = "/kaggle/input/cityscapes/Cityspaces/gtFine/val/*/*"
train_dataset = CityscapesDataset(image_dir=train_dir, label_dir=label_train_dir)
valid_dataset = CityscapesDataset(image_dir=val_dir, label_dir=label_val_dir)
train_dataset, test_train_dataset = torch.utils.data.random_split(
train_dataset, [0.8, 0.2]
)
valid_dataset, test_valid_dataset = torch.utils.data.random_split(
valid_dataset, [0.8, 0.2]
)
test_dataset = test_train_dataset + test_valid_dataset
# dataset split still to be adjusted
# train_dataset.__getitem__(3)
# Get train and val data loaders
test_loader = DataLoader(test_dataset)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(valid_dataset, batch_size=batch_size, shuffle=True)
from torchvision import transforms
random_idx = random.randint(0, len(test_loader) - 1)
image, mask = test_dataset[random_idx]
jaccardk = MulticlassJaccardIndex(num_classes=20)
test_iou_score = 0
model.eval()
with torch.no_grad():
for idx, (data, target) in enumerate(test_loader):
preprocess = transforms.Normalize(
mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
)
input_tensor = preprocess(data)
image = data
        output = model(input_tensor.to(device))["out"][0]
        output_predictions = output.argmax(0).cpu()
# IoU evaluation
target = target.squeeze()
test_iou = jaccardk(output_predictions, target)
image = image.squeeze()
        image = image.permute(1, 2, 0).cpu().numpy() / 255.0  # back to HxWxC in [0, 1] for display
mask = target.cpu().numpy()
out = output_predictions.cpu().numpy()
plt.figure()
fig, axs = plt.subplots(1, 3, sharex=True, sharey=True, figsize=(8, 3))
        axs[0].imshow(image)
        axs[0].set_title("RGB Image", fontsize=10, fontweight="bold")
axs[1].imshow(mask)
axs[1].set_title("Ground truth", fontsize=10, fontweight="bold")
axs[2].imshow(out)
axs[2].set_title("Prediction", fontsize=10, fontweight="bold")
if idx % 5 == 0:
plt.close("all")
print(f"Test IoU = {test_iou}")
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # for the graphs
import seaborn as sns
plt.style.use("ggplot")
import nltk
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# ## Read In Data
# Read in data in a data frame
df = pd.read_csv("../input/movie-reviews-dataset/ALL_AUDIENCE_REVIEWS.csv")
df.head()
df["reviewContent"].values[0]
print(df.shape) # 1100 rows, 7 columns
df = df.head(550)
df.head()
# ## Quick Exploratory Data Analysis (EDA)
ax = (
df["reviewRating"]
.value_counts()
.sort_index()
.plot(kind="bar", title="Count of Reviews by Ratings", figsize=(10, 5))
)
ax.set_xlabel("Review Ratings")
plt.show()
# ## Basic NLTK
example = df["reviewContent"][50]
print(example)
tokens = nltk.word_tokenize(example)
tokens[:10]
tagged = nltk.pos_tag(tokens)
tagged[:10]
entities = nltk.chunk.ne_chunk(tagged)
entities.pprint()
# ## Sentiment Analysis Version 1: Using VADER
# VADER (Valence Aware Dictionary and sEntiment Reasoner) - Bag of words approach
# > Using NLTK's `SentimentIntensityAnalyzer` to get the neg/neu/pos scores of the text.
# * This uses a "bag of words" approach:
# 1. Stop words are removed (e.g. "and", "the") - words used only for structure.
# 2. Each remaining word is scored, and the scores are combined into an overall (compound) score.
# *Note: This does not capture the relationships between words (a small helper for labelling the compound score is sketched below).
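# A small, hedged helper (not part of the original notebook): the VADER authors suggest
# mapping the compound score to a label using +/- 0.05 thresholds.
def compound_to_label(compound: float) -> str:
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"
print(compound_to_label(0.42))  # -> positive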
from nltk.sentiment import SentimentIntensityAnalyzer
from tqdm.notebook import tqdm
sia = SentimentIntensityAnalyzer()
sia.polarity_scores("You look lonely, I can fix that!")
sia.polarity_scores("League of Legends is so fun xd")
sia.polarity_scores(example)
# Run the polarity score on the entire dataset
result = {}
for i, row in tqdm(df.iterrows(), total=len(df)):
text = row["reviewContent"]
myid = str(i + 1)
# myid = row['ID']
result[myid] = sia.polarity_scores(text)
result_10 = dict(list(result.items())[:10])
result_10
vaders = pd.DataFrame(result).T
vaders = vaders.reset_index().rename(columns={"index": "ID"})
# Give df a matching string "ID" column (the result dict above is keyed by str(i + 1)) so the merge has a common key
df["ID"] = (df.index + 1).astype(str)
vaders = vaders.merge(df, how="left", on="ID")
vaders
<jupyter_start><jupyter_text>Brain Tumor Classification (MRI)
# Contribute to OpenSource
## Repo: [GitHub](https://github.com/SartajBhuvaji/Brain-Tumor-Classification-Using-Deep-Learning-Algorithms)
## Read Me: [Link](https://github.com/SartajBhuvaji/Brain-Tumor-Classification-Using-Deep-Learning-Algorithms/tree/master#contributing-to-the-project)
# Abstract
A brain tumor is considered one of the most aggressive diseases among children and adults. Brain tumors account for 85 to 90 percent of all primary Central Nervous System (CNS) tumors. Every year, around 11,700 people are diagnosed with a brain tumor. The 5-year survival rate for people with a cancerous brain or CNS tumor is approximately 34 percent for men and 36 percent for women. Brain Tumors are classified as: Benign Tumor, Malignant Tumor, Pituitary Tumor, etc. Proper treatment, planning, and accurate diagnostics should be implemented to improve the life expectancy of the patients. The best technique to detect brain tumors is Magnetic Resonance Imaging (MRI). A huge amount of image data is generated through the scans. These images are examined by the radiologist. A manual examination can be error-prone due to the level of complexities involved in brain tumors and their properties.
Application of automated classification techniques using Machine Learning (ML) and Artificial Intelligence (AI) has consistently shown higher accuracy than manual classification. Hence, proposing a system performing detection and classification using Deep Learning algorithms such as Convolutional Neural Network (CNN), Artificial Neural Network (ANN), and Transfer Learning (TL) would be helpful to doctors all around the world.
### Context
Brain Tumors are complex. There are a lot of abnormalities in the sizes and location of the brain tumor(s). This makes it really difficult to fully understand the nature of the tumor. Also, a professional neurosurgeon is required for MRI analysis. Oftentimes in developing countries, the lack of skillful doctors and lack of knowledge about tumors make it really challenging and time-consuming to generate reports from MRI scans. So an automated system on the cloud can solve this problem.
### Definition
To detect and classify brain tumors using CNN and TL, as an asset of Deep Learning, and to examine the tumor position (segmentation).
Kaggle dataset identifier: brain-tumor-classification-mri
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
from typing import Tuple, List
import tensorflow as tf
from tensorflow import image as tfi
from keras.models import Sequential, Model
from keras.layers import (
Dense,
Conv2D,
MaxPooling2D,
Flatten,
Dropout,
BatchNormalization,
GlobalAveragePooling2D,
)
from keras.regularizers import l2
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications import ResNet50V2
from tensorflow.keras.applications import ResNet152V2
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import MobileNetV2
import cv2 as cv
from tqdm import tqdm
from IPython.display import clear_output as cls
import os
from glob import glob
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
cls()
image_path = (
"/kaggle/input/brain-tumor-classification-mri/Testing/glioma_tumor/image(4).jpg"
)
image = cv.imread(image_path)
image = cv.cvtColor(image, cv.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib display
plt.imshow(image)
plt.axis("off")
plt.show()
train_dir = "/kaggle/input/brain-tumor-classification-mri/Training"
valid_dir = "/kaggle/input/brain-tumor-classification-mri/Testing"
# Get all class names and count the number of classes
class_names = os.listdir(train_dir)
n_classes = len(class_names)
# Set some constants for the dataset
BATCH_SIZE = 32 # Number of samples in each batch during training
IMG_SIZE = 224 # Size of the image
AUTOTUNE = tf.data.AUTOTUNE # Set to optimize the buffer size automatically
LEARNING_RATE = 1e-3 # Learning rate for the optimizer used during model training
# Set the random seed for reproducibility
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)
def load_image(image_path: str) -> tf.Tensor:
"""
    Loads the image at the given image path. While loading, the function also performs some
    preprocessing steps such as resizing and normalization.
Argument:
image_path(str) : This is a string which represents the location of the image file to be loaded.
Returns:
image(tf.Tensor) : This is the image which is loaded from the given image part in the form of a tensor.
"""
# Check if image path exists
assert os.path.exists(image_path), f"Invalid image path: {image_path}"
# Load the image
image = plt.imread(image_path)
# Resize the Image
image = tfi.resize(image, (IMG_SIZE, IMG_SIZE))
# Convert image data type to tf.float32
image = tf.cast(image, tf.float32)
# Normalize the image to bring pixel values between 0 - 1
image = image / 255.0
return image
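# Quick sanity check (illustrative only), reusing the sample `image_path` loaded earlier:
_sample = load_image(image_path)
print(_sample.shape, _sample.dtype)  # expected: (224, 224, 3), float32, values scaled to [0, 1]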
def load_dataset(
root_path: str, class_names: list, batch_size: int = 32, buffer_size: int = 1000
) -> Tuple[np.ndarray, np.ndarray]:
"""
Load and preprocess images from the given root path and return them as numpy arrays.
Args:
root_path (str): Path to the root directory where all the subdirectories (class names) are present.
class_names (list): List of the names of all the subdirectories (class names).
batch_size (int): Batch size of the final dataset. Defaults to 32.
buffer_size (int): Buffer size to use when shuffling the data. Defaults to 1000.
Returns:
Two numpy arrays, one containing the images and the other containing their respective labels.
"""
# Collect total number of data samples
n_samples = sum(
[len(os.listdir(os.path.join(root_path, name))) for name in class_names]
)
# Create arrays to store images and labels
images = np.empty(shape=(n_samples, IMG_SIZE, IMG_SIZE, 3), dtype=np.float32)
labels = np.empty(shape=(n_samples, 1), dtype=np.int32)
# Loop over all the image file paths, load and store the images with respective labels
n_image = 0
for class_name in tqdm(class_names, desc="Loading"):
class_path = os.path.join(root_path, class_name)
for file_path in glob(os.path.join(class_path, "*")):
# Load the image
image = load_image(file_path)
# Assign label
label = class_names.index(class_name)
# Store the image and the respective label
images[n_image] = image
labels[n_image] = label
# Increment the number of images processed
n_image += 1
# Shuffle the data
indices = np.random.permutation(n_samples)
images = images[indices]
labels = labels[indices]
return images, labels
X_train, y_train = load_dataset(root_path=train_dir, class_names=class_names)
X_valid, y_valid = load_dataset(root_path=valid_dir, class_names=class_names)
X_train.shape, y_train.shape
def show_images(
images: np.ndarray,
labels: np.ndarray,
n_rows: int = 1,
n_cols: int = 5,
figsize: tuple = (25, 8),
model: tf.keras.Model = None,
) -> None:
"""
Plots a grid of random images and their corresponding labels, with an optional prediction from a given model.
Args:
images (np.ndarray): Array of images to plot.
labels (np.ndarray): Array of labels corresponding to the images.
n_rows (int): Number of rows in the plot grid. Default is 1.
n_cols (int): Number of columns in the plot grid. Default is 5.
figsize (tuple): A tuple specifying the size of the figure. Default is (25, 8).
model (tf.keras.Model): A Keras model object used to make predictions on the images. Default is None.
Returns:
None
"""
# Loop over each row of the plot
for row in range(n_rows):
# Create a new figure for each row
plt.figure(figsize=figsize)
# Generate a random index for each column in the row
rand_indices = np.random.choice(len(images), size=n_cols, replace=False)
# Loop over each column of the plot
for col, index in enumerate(rand_indices):
# Get the image and label at the random index
image = images[index]
label = class_names[int(labels[index])]
# If a model is provided, make a prediction on the image
if model:
prediction = model.predict(
np.expand_dims(tf.squeeze(image), axis=0), verbose=0
)[0]
label += f"\nPrediction: {class_names[np.argmax(prediction)]}"
# Plot the image and label
plt.subplot(1, n_cols, col + 1)
plt.imshow(image)
plt.title(label.title())
plt.axis("off")
# Show the row of images
plt.show()
show_images(images=X_train, labels=y_train, n_rows=3)
# Collect all backbones
BACKBONES = [
ResNet50V2(
input_shape=(IMG_SIZE, IMG_SIZE, 3), weights="imagenet", include_top=False
),
ResNet152V2(
input_shape=(IMG_SIZE, IMG_SIZE, 3), weights="imagenet", include_top=False
),
InceptionV3(
input_shape=(IMG_SIZE, IMG_SIZE, 3), weights="imagenet", include_top=False
),
Xception(
input_shape=(IMG_SIZE, IMG_SIZE, 3), weights="imagenet", include_top=False
),
MobileNetV2(
input_shape=(IMG_SIZE, IMG_SIZE, 3), weights="imagenet", include_top=False
),
VGG16(input_shape=(IMG_SIZE, IMG_SIZE, 3), weights="imagenet", include_top=False),
]
# Define all the backbone names. This will be later used during visualization
BACKBONES_NAMES = [
"ResNet50V2",
"ResNet152V2",
"InceptionV3",
"Xception",
"MobileNetV2",
"VGG16",
]
# Freeze the weights of all the backbones
for backbone in BACKBONES:
backbone.trainable = False
cls()
# Set the size of the subset
subset_size = 1000
# Generate a random subset of indices
subset_indices_train = np.random.choice(len(X_train), size=subset_size, replace=False)
# Use the indices to extract a subset of the training data
X_sub, y_sub = X_train[subset_indices_train], y_train[subset_indices_train]
# Initialize an empty list to hold the histories of each backbone architecture.
HISTORIES = []
# Loop over every backbone in the BACKBONES list.
for backbone in tqdm(BACKBONES, desc="Training Backbone"):
# Create the simplest model architecture using the current backbone.
model = Sequential(
[
backbone,
GlobalAveragePooling2D(),
Dropout(0.5),
Dense(n_classes, activation="softmax"),
]
)
# Compile the model with the specified loss function, optimizer, and metrics.
model.compile(
loss="sparse_categorical_crossentropy",
optimizer=Adam(learning_rate=LEARNING_RATE),
metrics="accuracy",
)
# Train the model on a subset of the training data.
history = model.fit(
X_sub, y_sub, epochs=10, validation_split=0.2, batch_size=BATCH_SIZE
)
# Store the history of the trained model.
HISTORIES.append(history.history)
print(len(HISTORIES))
# Convert all the histories into Pandas data frame.
HISTORIES_DF = [pd.DataFrame(history) for history in HISTORIES]
# Loop over the model training curves
for index, (name, history) in enumerate(zip(BACKBONES_NAMES, HISTORIES_DF)):
print(f"Processing {name}") # Debug: Print the current backbone name
# Create a new figure for each backbone
plt.figure(figsize=(20, 5))
# Plot the loss curve in the first subplot
plt.subplot(1, 2, 1)
plt.title(f"{name} - Loss Curve")
plt.plot(history["loss"], label="Training Loss")
plt.plot(history["val_loss"], label="Validation Loss")
plt.plot(
[9, 9],
[min(history["loss"]), min(history["val_loss"])],
linestyle="--",
marker="*",
color="k",
alpha=0.7,
)
plt.text(
x=9.1,
y=np.mean([min(history["loss"]), min(history["val_loss"])]),
s=str(np.round(min(history["val_loss"]) - min(history["loss"]), 3)),
fontsize=10,
color="b",
)
    # Draw a horizontal reference line at the minimum training loss
plt.axhline(min(history["loss"]), color="g", linestyle="--", alpha=0.5)
# Set the x- and y-labels, and the x- and y-limits
plt.xlabel("Epochs")
plt.ylabel("Cross Entropy Loss")
# plt.ylim([0.1, 0.3])
# plt.xlim([5, 10])
# Show the legend and grid
plt.legend()
plt.grid()
# Plot the accuracy curve in the second subplot
plt.subplot(1, 2, 2)
plt.title(f"{name} - Accuracy Curve")
plt.plot(history["accuracy"], label="Training Accuracy")
plt.plot(history["val_accuracy"], label="Validation Accuracy")
# Plot a vertical line at epoch 9 and annotate the difference between the validation accuracy and the training accuracy
plt.plot(
[9, 9],
[max(history["accuracy"]), max(history["val_accuracy"])],
linestyle="--",
marker="*",
color="k",
alpha=0.7,
)
plt.text(
x=9.1,
y=np.mean([max(history["accuracy"]), max(history["val_accuracy"])]),
s=str(np.round(max(history["accuracy"]) - max(history["val_accuracy"]), 3)),
fontsize=15,
color="b",
)
# Set the x- and y-labels, and the x- and y-limits
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
# plt.ylim([0.9,1.0])
# plt.xlim([0.5, 10])
# Show the legend and grid
plt.legend()
plt.grid()
# Show the plot
plt.show()
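# A compact summary of the backbone comparison (an added sketch, not part of the original
# notebook): collect the best validation accuracy and lowest validation loss per backbone
# to make the choice of the two strongest backbones below easier to justify.
summary_df = pd.DataFrame(
    {
        "backbone": BACKBONES_NAMES,
        "best_val_accuracy": [max(h["val_accuracy"]) for h in HISTORIES],
        "best_val_loss": [min(h["val_loss"]) for h in HISTORIES],
    }
).sort_values("best_val_accuracy", ascending=False)
print(summary_df)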
# Get the index of the ResNet50V2 and ResNet152V2 backbones
resnet50_index = BACKBONES_NAMES.index("ResNet50V2")
resnet15_index = BACKBONES_NAMES.index("ResNet152V2")
# Define the figure configuration
plt.figure(figsize=(25, 10))
# Subplot for training loss comparison
plt.subplot(2, 2, 1)
plt.title("Training Loss Comparison")
plt.plot(HISTORIES[resnet50_index]["loss"], label="ResNet50V2")
plt.plot(HISTORIES[resnet15_index]["loss"], label="ResNet152V2")
plt.xlabel("Epochs")
plt.ylabel("Cross Entropy Loss")
plt.legend()
plt.grid()
# Subplot for validation loss comparison
plt.subplot(2, 2, 2)
plt.title("Validation Loss Comparison")
plt.plot(HISTORIES[resnet50_index]["val_loss"], label="ResNet50V2")
plt.plot(HISTORIES[resnet15_index]["val_loss"], label="ResNet152V2")
plt.xlabel("Epochs")
plt.ylabel("Cross Entropy Loss")
plt.legend()
plt.grid()
# Subplot for training accuracy comparison
plt.subplot(2, 2, 3)
plt.title("Training Accuracy Comparison")
plt.plot(HISTORIES[resnet50_index]["accuracy"], label="ResNet50V2")
plt.plot(HISTORIES[resnet15_index]["accuracy"], label="ResNet152V2")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.grid()
# Subplot for validation accuracy comparison
plt.subplot(2, 2, 4)
plt.title("Validation Accuracy Comparison")
plt.plot(HISTORIES[resnet50_index]["val_accuracy"], label="ResNet50V2")
plt.plot(HISTORIES[resnet15_index]["val_accuracy"], label="ResNet152V2")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.grid()
plt.show()
res50 = ResNet50V2(
input_shape=(IMG_SIZE, IMG_SIZE, 3), weights="imagenet", include_top=False
)
# Keep the backbone weights trainable so the whole network is fine-tuned
res50.trainable = True
res_baseline = Sequential(
[
res50,
GlobalAveragePooling2D(),
Dropout(0.5),
Dense(n_classes, activation="softmax"),
]
)
res_baseline.compile(
loss="sparse_categorical_crossentropy",
optimizer=Adam(learning_rate=LEARNING_RATE),
metrics=["accuracy"],
)
res_baseline.fit(
X_train,
y_train,
validation_data=(X_valid, y_valid),
epochs=50,
callbacks=[EarlyStopping(patience=3, restore_best_weights=True)],
batch_size=BATCH_SIZE,
)
xtest_loss, xtest_acc = res_baseline.evaluate(X_valid, y_valid)
print(f"ResNet50 Baseline Testing Loss : {xtest_loss}.")
print(f"ResNet50 Baseline Testing Accuracy : {xtest_acc}.")
res_baseline.summary()
def build_model(hp):
    # Define all hyperparameters
n_layers = hp.Choice("n_layers", [0, 2, 4])
dropout_rate = hp.Choice("rate", [0.2, 0.4, 0.5, 0.7])
n_units = hp.Choice("units", [64, 128, 256, 512])
    # Model architecture
model = Sequential(
[
res_baseline,
]
)
# Add hidden/top layers
for _ in range(n_layers):
model.add(Dense(n_units, activation="relu", kernel_initializer="he_normal"))
# Add Dropout Layer
model.add(Dropout(dropout_rate))
# Output Layer
    model.add(Dense(n_classes, activation="softmax"))
# Compile the model
model.compile(
loss="sparse_categorical_crossentropy",
optimizer=Adam(LEARNING_RATE),
metrics=["accuracy"],
)
# Return model
return model
import keras_tuner as kt
# Initialize Random Searcher
random_searcher = kt.RandomSearch(
hypermodel=build_model,
objective="val_loss",
max_trials=10,
seed=42,
project_name="ResNet50",
loss="sparse_categorical_crossentropy",
)
# Start Searching
random_searcher.search(
X_train,
y_train,
validation_data=(X_valid, y_valid),
epochs=10,
batch_size=BATCH_SIZE,
)
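# Once the search finishes, the tuner can report its best trial. A minimal sketch using the
# standard keras_tuner API (get_best_hyperparameters / get_best_models); the exact values
# depend on the run.
best_hp = random_searcher.get_best_hyperparameters(num_trials=1)[0]
print("Best hyperparameters:", best_hp.values)
best_model = random_searcher.get_best_models(num_models=1)[0]
print("Best model validation metrics:", best_model.evaluate(X_valid, y_valid, verbose=0))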
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/472/129472577.ipynb
|
brain-tumor-classification-mri
|
sartajbhuvaji
|
[{"Id": 129472577, "ScriptId": 38260544, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13465016, "CreationDate": "05/14/2023 05:57:58", "VersionNumber": 2.0, "Title": "CNN using Keras Tuner", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 444.0, "LinesInsertedFromPrevious": 405.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 39.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185562927, "KernelVersionId": 129472577, "SourceDatasetVersionId": 1183165}]
|
[{"Id": 1183165, "DatasetId": 672377, "DatasourceVersionId": 1214258, "CreatorUserId": 3469060, "LicenseName": "CC0: Public Domain", "CreationDate": "05/24/2020 16:24:55", "VersionNumber": 2.0, "Title": "Brain Tumor Classification (MRI)", "Slug": "brain-tumor-classification-mri", "Subtitle": "Classify MRI images into four classes", "Description": "# Contribute to OpenSource\n##Repo: [GitHub](https://github.com/SartajBhuvaji/Brain-Tumor-Classification-Using-Deep-Learning-Algorithms)\n## Read Me: [Link](https://github.com/SartajBhuvaji/Brain-Tumor-Classification-Using-Deep-Learning-Algorithms/tree/master#contributing-to-the-project)\n\n\n# Abstract\nA Brain tumor is considered as one of the aggressive diseases, among children and adults. Brain tumors account for 85 to 90 percent of all primary Central Nervous System(CNS) tumors. Every year, around 11,700 people are diagnosed with a brain tumor. The 5-year survival rate for people with a cancerous brain or CNS tumor is approximately 34 percent for men and36 percent for women. Brain Tumors are classified as: Benign Tumor, Malignant Tumor, Pituitary Tumor, etc. Proper treatment, planning, and accurate diagnostics should be implemented to improve the life expectancy of the patients. The best technique to detect brain tumors is Magnetic Resonance Imaging (MRI). A huge amount of image data is generated through the scans. These images are examined by the radiologist. A manual examination can be error-prone due to the level of complexities involved in brain tumors and their properties.\n\nApplication of automated classification techniques using Machine Learning(ML) and Artificial Intelligence(AI)has consistently shown higher accuracy than manual classification. Hence, proposing a system performing detection and classification by using Deep Learning Algorithms using ConvolutionNeural Network (CNN), Artificial Neural Network (ANN), and TransferLearning (TL) would be helpful to doctors all around the world.\n\n### Context\n\nBrain Tumors are complex. There are a lot of abnormalities in the sizes and location of the brain tumor(s). This makes it really difficult for complete understanding of the nature of the tumor. Also, a professional Neurosurgeon is required for MRI analysis. Often times in developing countries the lack of skillful doctors and lack of knowledge about tumors makes it really challenging and time-consuming to generate reports from MRI\u2019. So an automated system on Cloud can solve this problem.\n\n\n### Definition\n\nTo Detect and Classify Brain Tumor using, CNN and TL; as an asset of Deep Learning and to examine the tumor position(segmentation).\n\n\n### Acknowledgements for Dataset.\n\nNavoneel Chakrabarty\nSwati Kanchan\n\n### Team\n\nSartaj Bhuvaji\nAnkita Kadam\nPrajakta Bhumkar\nSameer Dedge", "VersionNotes": "Automatic Update 2020-05-24", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 672377, "CreatorUserId": 3469060, "OwnerUserId": 3469060.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1183165.0, "CurrentDatasourceVersionId": 1214258.0, "ForumId": 686859, "Type": 2, "CreationDate": "05/24/2020 16:22:54", "LastActivityDate": "05/24/2020", "TotalViews": 302511, "TotalDownloads": 32508, "TotalVotes": 481, "TotalKernels": 255}]
|
[{"Id": 3469060, "UserName": "sartajbhuvaji", "DisplayName": "Sartaj", "RegisterDate": "07/16/2019", "PerformanceTier": 0}]
|
| false | 0 | 4,452 | 0 | 5,081 | 4,452 |
||
129472829
|
<jupyter_start><jupyter_text>Mall Customer Segmentation Data
### Context
This data set is created only for the learning purpose of the customer segmentation concepts , also known as market basket analysis . I will demonstrate this by using unsupervised ML technique (KMeans Clustering Algorithm) in the simplest form.
### Content
You own a supermarket mall and through membership cards, you have some basic data about your customers like Customer ID, age, gender, annual income and spending score.
Spending Score is something you assign to the customer based on your defined parameters like customer behavior and purchasing data.
**Problem Statement**
You own the mall and want to understand the customers like who can be easily converge [Target Customers] so that the sense can be given to marketing team and plan the strategy accordingly.
Kaggle dataset identifier: customer-segmentation-tutorial-in-python
<jupyter_code>import pandas as pd
df = pd.read_csv('customer-segmentation-tutorial-in-python/Mall_Customers.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CustomerID 200 non-null int64
1 Gender 200 non-null object
2 Age 200 non-null int64
3 Annual Income (k$) 200 non-null int64
4 Spending Score (1-100) 200 non-null int64
dtypes: int64(4), object(1)
memory usage: 7.9+ KB
<jupyter_text>Examples:
{
"CustomerID": 1,
"Gender": "Male",
"Age": 19,
"Annual Income (k$)": 15,
"Spending Score (1-100)": 39
}
{
"CustomerID": 2,
"Gender": "Male",
"Age": 21,
"Annual Income (k$)": 15,
"Spending Score (1-100)": 81
}
{
"CustomerID": 3,
"Gender": "Female",
"Age": 20,
"Annual Income (k$)": 16,
"Spending Score (1-100)": 6
}
{
"CustomerID": 4,
"Gender": "Female",
"Age": 23,
"Annual Income (k$)": 16,
"Spending Score (1-100)": 77
}
<jupyter_script># # Introduction
# In this project, I will perform unsupervised clustering on the customers' records. Customer segmentation is the practice of separating customers into groups that reflect similarities among the customers in each cluster.
# Clustering is the process of finding groups of data points such that the data points in a group are similar to one another and different from the data points in other groups.
# # Overview
# **Steps to solve the problem :**
# 1. Importing Libraries.
# 2. Exploration of data.
# 3. Data Visualization.
# 4. Clustering using K-Means.
# 5. Selection of Clusters.
# 6. Plotting the Cluster Boundary and Clusters.
# 7. 3D Plot of Clusters.
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
import warnings
warnings.filterwarnings("ignore")
# importing required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# reading dataset
df = pd.read_csv(
"/kaggle/input/customer-segmentation-tutorial-in-python/Mall_Customers.csv"
)
df.head()
# # EDA
# getting shape of data
df.shape
# getting some info
df.info()
# we do not need customer id so, drop it
df = df.drop(["CustomerID"], axis=1)
# checking for null values
df.isnull().sum()
# Checking for duplicate row
df.duplicated().sum()
# Descriptive Analysis
df.describe()
# ### Data Types
# our dataset has both numerical and categorical features.
numerical_features = [
feature for feature in df.columns if df[feature].dtype not in ["o", "object"]
]
numerical_features
categorical_features = [
feature for feature in df.columns if df[feature].dtype in ["o", "object"]
]
categorical_features
# # Data Visualization
# # Univariate Analysis
# ### Numerical Data
for feature in numerical_features:
plt.figure(figsize=(18, 6))
plt.subplot(1, 2, 1)
    plt.title("Distribution Plot")
sns.distplot(df[feature])
plt.subplot(1, 2, 2)
plt.title("Box Plot")
sns.boxplot(df[feature], palette="Set1_r")
plt.show()
# ### Categorical Data
plt.figure(figsize=(12, 4))
sns.countplot(data=df, y="Gender")
plt.title("Gender")
plt.show()
# # Bivariate Analysis
df.head()
# Grouping the data with respect to gender
df.groupby("Gender").agg(["count", "max", "min", "mean"])
df.groupby("Gender").agg(["count", "mean"]).plot(kind="bar")
# ### Analysing the relationships between pairs of features.
plt.figure(figsize=(12, 6))
sns.scatterplot(
data=df, x="Age", y="Annual Income (k$)", hue="Gender", s=200, alpha=0.9
)
plt.figure(figsize=(12, 6))
sns.scatterplot(
data=df, x="Age", y="Spending Score (1-100)", hue="Gender", s=200, alpha=0.9
)
# # Multivariate Analysis
# #### Finding the variance of each feature
var = df.var()
var
# #### Finding the correlation between features
cor = df.corr()
cor
# Plotting heat map to visualize correlation
sns.heatmap(cor, annot=True)
plt.show()
# #### Pair Plot
sns.pairplot(df)
plt.show()
# # One hot encoding
new_df = pd.get_dummies(df)
new_df.head()
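# K-Means is distance based, so features on larger scales (such as Annual Income) can dominate
# the clustering. This is an optional, added sketch (not in the original notebook) that
# standardizes the encoded features; the rest of the notebook keeps using new_df as-is.
from sklearn.preprocessing import StandardScaler

scaled_df = pd.DataFrame(StandardScaler().fit_transform(new_df), columns=new_df.columns)
scaled_df.head()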
# # Model Building
from sklearn.cluster import KMeans
inertia = []
for n in range(1, 11):
algorithm = KMeans(
n_clusters=n,
init="k-means++",
n_init=10,
max_iter=300,
tol=0.0001,
random_state=111,
algorithm="elkan",
)
algorithm.fit(new_df)
inertia.append(algorithm.inertia_)
# The elbow method is a graphical way of finding the optimal 'K' in K-means clustering. Here we plot the inertia curve to find the optimal number of clusters using the elbow method.
plt.figure(1, figsize=(15, 6))
plt.plot(np.arange(1, 11), inertia, "o")
plt.plot(np.arange(1, 11), inertia, "-", alpha=0.5)
plt.xlabel("Number of Clusters"), plt.ylabel("Inertia")
plt.show()
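# As a complementary check to the elbow curve (an added sketch, not in the original notebook),
# silhouette scores can also guide the choice of k; higher values indicate better-separated clusters.
from sklearn.metrics import silhouette_score

for n in range(2, 11):
    km = KMeans(n_clusters=n, init="k-means++", n_init=10, random_state=111)
    cluster_labels = km.fit_predict(new_df)
    print(f"k={n}: silhouette score = {silhouette_score(new_df, cluster_labels):.3f}")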
# #### Now, building a KMeans model with 6 clusters
clustring_model = KMeans(
n_clusters=6,
init="k-means++",
n_init=10,
max_iter=300,
tol=0.0001,
random_state=111,
algorithm="elkan",
)
clustring_model.fit(new_df)
labels3 = clustring_model.labels_
centroids3 = clustring_model.cluster_centers_
labels = clustring_model.predict(new_df)
labels
# #### Visualizing the final clusters and their data points
plt.scatter(new_df.values[:, 0], new_df.values[:, 1], c=labels)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/472/129472829.ipynb
|
customer-segmentation-tutorial-in-python
|
vjchoudhary7
|
[{"Id": 129472829, "ScriptId": 38188829, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5850247, "CreationDate": "05/14/2023 06:00:59", "VersionNumber": 1.0, "Title": "Customer Segmentation | K-Means Clustering", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 178.0, "LinesInsertedFromPrevious": 178.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
|
[{"Id": 185563363, "KernelVersionId": 129472829, "SourceDatasetVersionId": 74935}]
|
[{"Id": 74935, "DatasetId": 42674, "DatasourceVersionId": 77392, "CreatorUserId": 1790645, "LicenseName": "Other (specified in description)", "CreationDate": "08/11/2018 07:23:02", "VersionNumber": 1.0, "Title": "Mall Customer Segmentation Data", "Slug": "customer-segmentation-tutorial-in-python", "Subtitle": "Market Basket Analysis", "Description": "### Context\n\nThis data set is created only for the learning purpose of the customer segmentation concepts , also known as market basket analysis . I will demonstrate this by using unsupervised ML technique (KMeans Clustering Algorithm) in the simplest form. \n\n\n### Content\n\nYou are owing a supermarket mall and through membership cards , you have some basic data about your customers like Customer ID, age, gender, annual income and spending score. \nSpending Score is something you assign to the customer based on your defined parameters like customer behavior and purchasing data. \n\n**Problem Statement**\nYou own the mall and want to understand the customers like who can be easily converge [Target Customers] so that the sense can be given to marketing team and plan the strategy accordingly. \n\n\n### Acknowledgements\n\nFrom Udemy's Machine Learning A-Z course.\n\nI am new to Data science field and want to share my knowledge to others\n\nhttps://github.com/SteffiPeTaffy/machineLearningAZ/blob/master/Machine%20Learning%20A-Z%20Template%20Folder/Part%204%20-%20Clustering/Section%2025%20-%20Hierarchical%20Clustering/Mall_Customers.csv\n\n### Inspiration\n\nBy the end of this case study , you would be able to answer below questions. \n1- How to achieve customer segmentation using machine learning algorithm (KMeans Clustering) in Python in simplest way.\n2- Who are your target customers with whom you can start marketing strategy [easy to converse]\n3- How the marketing strategy works in real world", "VersionNotes": "Initial release", "TotalCompressedBytes": 3981.0, "TotalUncompressedBytes": 3981.0}]
|
[{"Id": 42674, "CreatorUserId": 1790645, "OwnerUserId": 1790645.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 74935.0, "CurrentDatasourceVersionId": 77392.0, "ForumId": 51177, "Type": 2, "CreationDate": "08/11/2018 07:23:02", "LastActivityDate": "08/11/2018", "TotalViews": 710472, "TotalDownloads": 132442, "TotalVotes": 1487, "TotalKernels": 1044}]
|
[{"Id": 1790645, "UserName": "vjchoudhary7", "DisplayName": "Vijay Choudhary", "RegisterDate": "04/05/2018", "PerformanceTier": 1}]
|
|
[{"customer-segmentation-tutorial-in-python/Mall_Customers.csv": {"column_names": "[\"CustomerID\", \"Gender\", \"Age\", \"Annual Income (k$)\", \"Spending Score (1-100)\"]", "column_data_types": "{\"CustomerID\": \"int64\", \"Gender\": \"object\", \"Age\": \"int64\", \"Annual Income (k$)\": \"int64\", \"Spending Score (1-100)\": \"int64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 200 entries, 0 to 199\nData columns (total 5 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 CustomerID 200 non-null int64 \n 1 Gender 200 non-null object\n 2 Age 200 non-null int64 \n 3 Annual Income (k$) 200 non-null int64 \n 4 Spending Score (1-100) 200 non-null int64 \ndtypes: int64(4), object(1)\nmemory usage: 7.9+ KB\n", "summary": "{\"CustomerID\": {\"count\": 200.0, \"mean\": 100.5, \"std\": 57.879184513951124, \"min\": 1.0, \"25%\": 50.75, \"50%\": 100.5, \"75%\": 150.25, \"max\": 200.0}, \"Age\": {\"count\": 200.0, \"mean\": 38.85, \"std\": 13.96900733155888, \"min\": 18.0, \"25%\": 28.75, \"50%\": 36.0, \"75%\": 49.0, \"max\": 70.0}, \"Annual Income (k$)\": {\"count\": 200.0, \"mean\": 60.56, \"std\": 26.264721165271244, \"min\": 15.0, \"25%\": 41.5, \"50%\": 61.5, \"75%\": 78.0, \"max\": 137.0}, \"Spending Score (1-100)\": {\"count\": 200.0, \"mean\": 50.2, \"std\": 25.823521668370173, \"min\": 1.0, \"25%\": 34.75, \"50%\": 50.0, \"75%\": 73.0, \"max\": 99.0}}", "examples": "{\"CustomerID\":{\"0\":1,\"1\":2,\"2\":3,\"3\":4},\"Gender\":{\"0\":\"Male\",\"1\":\"Male\",\"2\":\"Female\",\"3\":\"Female\"},\"Age\":{\"0\":19,\"1\":21,\"2\":20,\"3\":23},\"Annual Income (k$)\":{\"0\":15,\"1\":15,\"2\":16,\"3\":16},\"Spending Score (1-100)\":{\"0\":39,\"1\":81,\"2\":6,\"3\":77}}"}}]
| true | 1 |
<start_data_description><data_path>customer-segmentation-tutorial-in-python/Mall_Customers.csv:
<column_names>
['CustomerID', 'Gender', 'Age', 'Annual Income (k$)', 'Spending Score (1-100)']
<column_types>
{'CustomerID': 'int64', 'Gender': 'object', 'Age': 'int64', 'Annual Income (k$)': 'int64', 'Spending Score (1-100)': 'int64'}
<dataframe_Summary>
{'CustomerID': {'count': 200.0, 'mean': 100.5, 'std': 57.879184513951124, 'min': 1.0, '25%': 50.75, '50%': 100.5, '75%': 150.25, 'max': 200.0}, 'Age': {'count': 200.0, 'mean': 38.85, 'std': 13.96900733155888, 'min': 18.0, '25%': 28.75, '50%': 36.0, '75%': 49.0, 'max': 70.0}, 'Annual Income (k$)': {'count': 200.0, 'mean': 60.56, 'std': 26.264721165271244, 'min': 15.0, '25%': 41.5, '50%': 61.5, '75%': 78.0, 'max': 137.0}, 'Spending Score (1-100)': {'count': 200.0, 'mean': 50.2, 'std': 25.823521668370173, 'min': 1.0, '25%': 34.75, '50%': 50.0, '75%': 73.0, 'max': 99.0}}
<dataframe_info>
RangeIndex: 200 entries, 0 to 199
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CustomerID 200 non-null int64
1 Gender 200 non-null object
2 Age 200 non-null int64
3 Annual Income (k$) 200 non-null int64
4 Spending Score (1-100) 200 non-null int64
dtypes: int64(4), object(1)
memory usage: 7.9+ KB
<some_examples>
{'CustomerID': {'0': 1, '1': 2, '2': 3, '3': 4}, 'Gender': {'0': 'Male', '1': 'Male', '2': 'Female', '3': 'Female'}, 'Age': {'0': 19, '1': 21, '2': 20, '3': 23}, 'Annual Income (k$)': {'0': 15, '1': 15, '2': 16, '3': 16}, 'Spending Score (1-100)': {'0': 39, '1': 81, '2': 6, '3': 77}}
<end_description>
| 1,366 | 2 | 1,997 | 1,366 |
129405964
|
# # Visa For Lisa
# This project is about a bank (Galaxy Bank) whose management wants to explore converting some of its deposit customers to become personal loan customers (while retaining them as depositors). In short, they want to upsell customers to purchase more banking products from Galaxy Bank.
# The bank has had some previous success in upselling to its deposit clients. Still, unfortunately, not all clients offered a loan by Galaxy Bank accept it and become loan customers. The bank's campaign last year for its customers showed a healthy loan acceptance conversion rate of over 9%. This data has encouraged the marketing department to devise campaigns with better target marketing to increase the success ratio with a minimal budget.
# The bank wants to predict better and identify who will accept loans offered to potential loan customers. It will help make their marketing efforts more effective with higher conversion rates.
# **My mission is to help Galaxy Bank improve its marketing conversion rates by allowing them to target and predict which of their deposit clients are most likely to accept a loan offer from the bank. I will be meeting with the marketing leadership at the bank.**
# # Technical specification
# Implement a multi-variable linear regression model on a large and complex data set
# Analyze and evaluate the implications of your model on real-life users
# Analyze and evaluate the risk to the business of the implications, assumptions, and decisions in the model
# 1. Data Collecting / Cleaning
# 2. Data Exploration
# 3. Data Visualization
# 4. Machine Learning
# 5. Communication
# # Import Libraries
# I imported some necessary modules for this project
# 1. numpy to store data
# 2. pandas to manipulate data
# 3. matplotlib and seaborn to visualize data
# 4. sklearn to preprocess data and make predictions
# 5. scipy for statistical analysis
# 6. pickle to store the model for production
# 7. warnings, you know what for :))
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import r2_score
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from scipy.stats import shapiro
import pickle
import warnings
warnings.filterwarnings("ignore")
# # Load Dataset
# *Every Data Science project, after importing the necessary modules, starts with loading the data*
def load_dataset():
a = pd.read_csv("Visa_For_Lisa_Loan_Modelling.csv")
return a
data = load_dataset()
# # Collecting and Cleaning Data
# 1. ID: Unique identifier for each record or individual in the dataset.
# 2. Age: Age of the individual in years.
# 3. Experience: Number of years of work experience.
# 4. Income: Annual income of the individual in dollars.
# 5. ZIP Code: ZIP code of the individual's address.
# 6. Family: Number of family members.
# 7. CCAvg: Average spending on credit cards per month.
# 8. Education: Level of education attained by the individual (1: Undergraduate, 2: Graduate, 3: Advanced/Professional).
# 9. Mortgage: Value of the mortgage, if any, on the individual's property.
# 10. Personal Loan: Indicates whether the individual accepted a personal loan offer (1: Yes, 0: No).
# 11. Securities Account: Indicates whether the individual has a securities account with the bank (1: Yes, 0: No).
# 12. CD Account: Indicates whether the individual has a certificate of deposit (CD) account with the bank (1: Yes, 0: No).
# 13. Online: Indicates whether the individual uses online banking services (1: Yes, 0: No).
# 14. CreditCard: Indicates whether the individual has a credit card issued by the bank (1: Yes, 0: No).
# **These columns provide information about various attributes of individuals, their financial behavior, and the products/services they use.**
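# Since "Personal Loan" is the target we ultimately care about, it is worth checking how
# imbalanced it is before modelling (an added check, not in the original notebook).
print(data["Personal Loan"].value_counts())
print("Acceptance rate:", data["Personal Loan"].mean())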
def collecting_and_cleaning_data(data):
checking_all_types = (
data.info()
    )  # All columns are int or float, which is good, but a few columns still deserve a closer look
checking_age = data["Age"].unique()
checking_expirience = data["Experience"].unique()
checking_mortgage = data["Mortgage"].unique()
checking_some_likely_data = data.iloc[:, 9:].values
print(f"Checking some likely data: {np.unique(checking_some_likely_data)}")
    print(
        f"Total NaN values = {data.isnull().sum().sum()}\n\nColumns that are entirely NaN:\n{data.isnull().all()}"
    )
return data
data_was_cleaned = collecting_and_cleaning_data(data)
data_was_cleaned
# # Data Exploration
# **Exploration data will give you really understand data**
# The "data_exploration" function provides a quick and concise overview of the dataset.
# The shape of the data reveals the size and structure of the dataset, indicating the number of rows and columns.
# The column names help identify the different attributes and variables present in the dataset.
# The descriptive statistics give us a summary of the dataset's numerical features, such as count, mean, standard deviation, minimum, maximum, and quartiles.
# The function is a useful tool for initial data exploration and understanding the dataset before diving into further analysis.
def data_exploration(data):
print(f"Shape of the Data: {data.shape}\n\n")
print(f"Columns of the Data: \n\n{[i for i in data.columns]}\n\n")
print(f"Describe of the Data: \n{data.describe()}")
return data.head()
summarize_data = data_exploration(data)
summarize_data
# # Data Visualization
# The first bar plot shows how frequently individuals of each age use the bank's system. It helps identify age groups that interact more frequently with the bank's system.
# The pie chart displays the mean experience level per age group. It highlights the distribution of experience levels among individuals aged 26 and above, excluding those with a negative experience value.
def data_analysis_part_one(data):
count_age = data.groupby(by=["Age"]).count()
plt.figure(figsize=(14, 10))
sns.barplot(data=count_age, x=count_age.index, y=count_age["ID"])
    plt.title("How often customers of each age use the bank's system")
plt.show()
s = data.groupby(by=["Age"]).mean()
s1 = [i for i in s["Experience"] if i > 0]
s2 = [i for i in s.index if i >= 26]
plt.figure(figsize=(19, 19))
plt.pie(s1, labels=s2, autopct="%1.1f%%")
    plt.title("Mean Experience per Age")
plt.legend(loc="upper left")
plt.show()
first_part = data_analysis_part_one(data)
first_part
# The bar plot illustrates the top 10 most active zip codes based on the frequency of occurrences. It provides insights into the areas with the highest engagement with the analyzed system.
# The histogram presents the distribution of age within different levels of education. It visualizes the age composition across various educational backgrounds, providing an understanding of the demographic makeup of the dataset.
def data_analysis_part_two(data):
just = data.groupby(by=["ZIP Code"]).count()
just_count = [i for i in just["ID"] if i > 51]
take_top = just[(just["ID"]) > 51]
plt.figure(figsize=(15, 14))
sns.barplot(data=take_top, x=take_top.index, y=take_top.ID)
    plt.title("Top 10 most active ZIP codes")
plt.show()
data["Age"].hist(data["Education"], rwidth=0.91, figsize=(14, 11))
plt.show()
second_part = data_analysis_part_two(data)
second_part
# The function generates four histograms, each representing the distribution of average spending on credit cards per month for different family sizes (Family 1, Family 2, Family 3, and Family 4). This visualization helps identify any variations or patterns in credit card usage based on family size.
# The scatter plot illustrates the relationship between mortgage uptake and age. It shows the age distribution of individuals who have taken a mortgage from the bank, providing insights into the age demographics of mortgage borrowers.
def data_analysis_part_three(data):
res = data.pivot(index="ID", columns="Family", values="CCAvg")
one_fam = res[1]
two_fam = res[2]
three_fam = res[3]
four_fam = res[4]
fig, axs = plt.subplots(1, 4, sharey=True, tight_layout=True, figsize=(20, 15))
axs[0].hist(one_fam)
axs[0].set_title("Average spending on credit cards per month Family 1")
axs[1].hist(two_fam)
axs[1].set_title("Average spending on credit cards per month Family 2")
axs[2].hist(three_fam)
axs[2].set_title("Average spending on credit cards per month Family 3")
axs[3].hist(four_fam)
axs[3].set_title("Average spending on credit cards per month Family 4")
plt.show()
    # Select customers who actually have a mortgage and plot its size against their age
    mortgage = data[data["Mortgage"] > 0]
    plt.figure(figsize=(14, 11))
    plt.scatter(mortgage["Mortgage"], mortgage["Age"])
    plt.title("Taking Mortgage from the bank")
    plt.xlabel("Mortgage amount")
plt.ylabel("Age")
plt.show()
third_part = data_analysis_part_three(data)
third_part
# This function first plots, for each of the four family-size categories, how many customers did (1) and did not (0) take a personal loan. This visualization helps identify any differences in personal loan adoption across family sizes.
# An example visualization is provided for Family 1, showing two overlaid histograms
def data_analysis_part_four(data):
jo = data.pivot_table(
index="ID", columns="Family", values="Personal Loan", fill_value=0
)
jo.hist(grid=False, figsize=(14, 11))
plt.show()
f1 = data[(data["Family"] == 1)]
count_f1 = len(
[i for i in f1["Personal Loan"] if i > 0]
) # 107 has taken Personal Loan from this bank (Families 1)
count_f10 = len(
[i for i in f1["Personal Loan"] if i == 0]
) # 1365 did not take a personal loan from this bank (Families 1)
f2 = data[(data["Family"] == 2)]
count_f2 = len(
[i for i in f2["Personal Loan"] if i > 0]
) # 106 has taken Personal Loan from this bank (Families 2)
count_f20 = len(
[i for i in f2["Personal Loan"] if i == 0]
) # 1190 did not take a personal loan from this bank (Families 2)
f3 = data[(data["Family"]) == 3]
count_f3 = len(
[i for i in f3["Personal Loan"] if i > 0]
) # 133 has taken Personal Loan from this bank (Families 3)
count_f30 = len(
[i for i in f3["Personal Loan"] if i == 0]
) # 877 did not take a personal loan from this bank (Families 3)
f4 = data[(data["Family"] == 4)]
count_f4 = len(
[i for i in f4["Personal Loan"] if i > 0]
) # 134 has taken a personal loan from this bank (Families 4)
count_f40 = len(
[i for i in f4["Personal Loan"] if i == 0]
) # 1088 did not take a personal loan from this bank (Families 4)
plt.figure(figsize=(14, 11))
plt.hist([i for i in f1["Personal Loan"] if i > 0])
plt.hist([i for i in f1["Personal Loan"] if i == 0])
plt.title("Example Vizualization (Families 1)")
plt.legend(["Has taken PL", "Didn't take PL"])
plt.show()
part_fourth = data_analysis_part_four(data)
part_fourth
# The function calculates the count of customers who have a securities account within the age range of 20 to 50 years. This count helps understand the prevalence of securities accounts among customers in this age group.
# The function also visualizes the relationship between age and annual income for customers aged 20 to 70 years. The bar plot showcases the annual income for different age groups, allowing for an understanding of income patterns across age ranges.
def data_analysis_part_five(data):
tt = data[(data["Age"] > 20) & (data["Age"] < 50)]
tt1 = len(
[i for i in tt["Securities Account"] if i > 0]
) # 316 Has a securities account, 20 to 50 years customers
for_count = tt.groupby(by=["Age"]).count()
plt.figure(figsize=(14, 11))
sns.barplot(data=for_count, x=for_count.index, y=for_count["Securities Account"])
plt.title("Has a securities account, 20 to 50 years customers from this bank")
plt.show()
ex1 = data[(data["Age"] > 20) & (data["Age"] < 70)]
plt.figure(figsize=(14, 11))
plt.bar(ex1["Age"], ex1["Income"])
plt.title("Annual income of the customers in this bank")
plt.xlabel("Age")
plt.ylabel("Income")
plt.show()
fiveth_part = data_analysis_part_five(data)
fiveth_part
# And of course we also visualize whether customers use Internet banking facilities, and whether they use a credit card issued by the bank.
def data_analysis_six(data):
use = [i for i in data["Online"] if i > 0]
not_use = [i for i in data["Online"] if i == 0]
plt.figure(figsize=(14, 12))
plt.hist(use)
plt.hist(not_use)
plt.title("Does the customer use Internet banking facilities")
plt.ylabel("Number of customers")
plt.legend(["use", "not use"])
use_cr = [i for i in data["CreditCard"] if i > 0]
not_use_cr = [i for i in data["CreditCard"] if i == 0]
plt.figure(figsize=(14, 12))
plt.hist(use_cr)
plt.hist(not_use_cr)
    plt.title("Does the customer use a credit card issued by this bank")
plt.legend(["Use Credit Card", "Not Use Credit Card"])
plt.ylabel("Number of Customers")
plt.show()
sixth_part = data_analysis_six(data)
sixth_part
# Before starting the machine learning process, we need to look at the correlations between the features.
# **As we can see, the Age and Experience columns are highly correlated.**
# The scatter plot below confirms it.
def correlation(data):
plt.figure(figsize=(14, 12))
plt.title("Correlation between each column of data")
sns.heatmap(data.corr(), annot=True)
    plt.show()  # as we can see, Age and Experience are strongly correlated
x = data.iloc[:, 1].values
y = data.iloc[:, 2].values
plt.figure(figsize=(14, 12))
plt.title("Checking correlation between Age and Experience")
plt.scatter(x, y)
plt.xlabel("Age")
plt.ylabel("Experience")
res = correlation(
data
)  # Keeping both highly correlated features adds redundancy; ideally the X features should not be strongly correlated with each other
res
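# To put a number on what the heatmap and scatter plot show (an added check, not in the
# original notebook): the Pearson correlation between Age and Experience is very high,
# so the two features carry nearly the same information.
print("Corr(Age, Experience):", data["Age"].corr(data["Experience"]))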
# Before starting to predict, it is worth knowing the distribution of each column; normally distributed, weakly correlated features tend to be easier to model.
# **However, none of the X feature columns is normally distributed, so this check is informative here rather than essential.**
def checking_distribution_of_x(data):
X = data.iloc[:, 1:13]
res = {}
for i in X:
res2 = shapiro(X[i])
res[i] = {"statistic": res2.statistic, "p-value": round(res2.pvalue, 100)}
display(
pd.DataFrame(res)
    )  # as we can see, none of the features is normally distributed, so we just note this and move on
all_results = checking_distribution_of_x(data)
all_results
# The X features used for prediction span very different numeric ranges, so we need to scale them
def preprocess_data(data):
scaler = MinMaxScaler()
X = data[
[
"Age",
"Experience",
"Income",
"Family",
"CCAvg",
"Education",
"Mortgage",
"Securities Account",
"CD Account",
"Online",
"CreditCard",
]
]
XX = scaler.fit_transform(X)
return XX
X = preprocess_data(data)
X
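# Note on terminology: MinMaxScaler above performs min-max normalization to [0, 1]. If true standardization
# (zero mean, unit variance) were preferred instead, a minimal sketch would be:
from sklearn.preprocessing import StandardScaler


def preprocess_data_standardized(data):
    features = [
        "Age", "Experience", "Income", "Family", "CCAvg", "Education",
        "Mortgage", "Securities Account", "CD Account", "Online", "CreditCard",
    ]
    return StandardScaler().fit_transform(data[features])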
# After preprocessing the data we fit a first model. Note that the score reported below is the R² of a plain linear regression, and it is not high enough to be useful for production, so we will try another way
def predicting_by_model(data):
y = data["Personal Loan"].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
model = LinearRegression(n_jobs=1)
model.fit(X_train, y_train)
y_predict = model.predict(X_test)
return f"Accuracy of predict by using just linear regression model: {r2_score(y_test, y_predict)}"
predict = predicting_by_model(data)
predict
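# Because "Personal Loan" is a binary target, a classification baseline is a natural point of comparison.
# A minimal logistic-regression sketch (an alternative model, not part of the original notebook):
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score


def predicting_with_logistic_regression(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))


print(
    "Logistic regression hold-out accuracy:",
    predicting_with_logistic_regression(X, data["Personal Loan"].values),
)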
# **So our result is not great; let's try an optimization algorithm that can help us predict better.**
# Gradient descent! It lets us fit the linear model iteratively and then threshold the output to get class predictions. Let's try this way.
# The result below reaches about 90.4% accuracy on the training data, which is good enough to save for production and report to the bank team.
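# For reference, the routine below is plain batch gradient descent on the mean-squared error
# $J(w, b) = \frac{1}{2n}\sum_{i=1}^{n}\left(x_i^{\top} w + b - y_i\right)^2$, with the updates
# $w \leftarrow w - \eta \, \frac{1}{n} X^{\top}(Xw + b - y)$ and $b \leftarrow b - \eta \, \frac{1}{n}\sum_{i=1}^{n}(x_i^{\top} w + b - y_i)$,
# where $\eta$ is the learning rate; the predicted class is then obtained by thresholding $Xw + b$ at 0.5.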
def gradient_descent(X, y, learning_rate, epoch):
w = np.zeros(X.shape[1])
b = 0
for _ in range(epoch):
y_pred = np.dot(X, w) + b
dw = (1 / len(y)) * np.dot(X.T, (y_pred - y))
db = (1 / len(y)) * np.sum(y_pred - y)
w -= learning_rate * dw
b -= learning_rate * db
return w, b
X = data[
[
"Age",
"Experience",
"Income",
"Family",
"CCAvg",
"Education",
"Mortgage",
"Securities Account",
"CD Account",
"Online",
"CreditCard",
]
]
y = data["Personal Loan"].values
X = np.c_[np.ones(X.shape[0]), X]
learning_rate = 0.01
epoch = 136
optimized_model = gradient_descent(X, y, learning_rate, epoch)
w, b = optimized_model
def predict(X, w, b, threshold=0.5):
y_pred = np.dot(X, w) + b
y_pred_binary = np.where(y_pred >= threshold, 1, 0)
return y_pred_binary
y_pred = predict(X, w, b)
accuracy = np.mean(y_pred == y)
print("Accuracy:", accuracy)
# Having obtained a good enough result with this optimization algorithm, we save the model so it can be shared with the team (and maybe earn a raise!)
def store_model_for_production(optimized_model):
    with open("best_accuracy.pkl", "wb") as f:
        pickle.dump(optimized_model, f)


store_model_for_production(optimized_model)  # the function must actually be called so the pickle file exists before it is read back below
# A quick manual test is useful here, simply because we don't have a dedicated tester :))
# I think this is the best result I've ever seen!
with open("best_accuracy.pkl", "rb") as f:
model = pickle.load(f)
w, b = model
def take_model_for_test(X, w, b, threshold=0.5):
    # prepend a column of ones, matching the bias column used during training (np.ones), so w[0] is applied correctly
    X_bias = np.c_[np.ones(X.shape[0]), X]
    y_pred = np.dot(X_bias, w) + b
    binary_pred = np.where(y_pred >= threshold, 1, 0)
    return binary_pred
# note: the 11 values must follow the training feature order (Age, Experience, Income, Family, CCAvg, Education, Mortgage, Securities Account, CD Account, Online, CreditCard)
input_for_predict = np.array([[1, 25, 1, 49, 91107, 4, 1.6, 1, 0, 0, 1]])
result = take_model_for_test(input_for_predict, w, b)
output = "yes" if result[0] == 1 else "no"
print(output)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/405/129405964.ipynb
| null | null |
[{"Id": 129405964, "ScriptId": 38475640, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11760991, "CreationDate": "05/13/2023 14:08:17", "VersionNumber": 1.0, "Title": "Visa For Lisa", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 369.0, "LinesInsertedFromPrevious": 369.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 5,309 | 0 | 5,309 | 5,309 |
||
129405966
|
<jupyter_start><jupyter_text>Indian Restaurants 2023 🍲
# Context
Dineout is a table booking platform helping customers to do table booking in their favourite restaurants for free and help them get great discounts.
The website is well known for displaying user ratings for restaurants, hotels, b&b, touristic attractions, and other places, with a total word count of all reviews is more than 10 million.
# Content
The Dineout dataset includes thousands of restaurants with attributes such as location data, average rating, number of reviews, cuisine types, etc.
The dataset combines the restaurants from the main Indian cities.
# Acknowledgements
Data has been retrieved from the publicly available website https://www.dineout.co.in/.
All the restaurants from the main Indian cities have been scraped in early March 2023.
# Inspiration
To provide further information in regards to the restaurants details that make them successful and appreciated by the users, with the possibility to compare the common features of different European countries regarding the average ratings, awards, open hours, reviews count, etc.
Kaggle dataset identifier: indian-restaurants-2023
<jupyter_script># ### Import Libraries
import os
import sys
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pyspark
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
from pyspark.sql import functions
from pyspark.sql.functions import desc, col, size, udf, countDistinct, round, count
from pyspark.sql.types import *
MAX_MEMORY = "3G"
conf = (
pyspark.SparkConf()
.setMaster("local[*]")
.set("spark.executor.heartbeatInterval", 10000)
.set("spark.network.timeout", 10000)
.set("spark.core.connection.ack.wait.timeout", "3600")
.set("spark.executor.memory", MAX_MEMORY)
.set("spark.driver.memory", MAX_MEMORY)
)
def init_spark():
return (
SparkSession.builder.appName("Indian Restaurants EDA Pyspark")
.config(conf=conf)
.getOrCreate()
)
spark = init_spark()
spark
# ### Import Dataset
schema = StructType(
[
StructField("Name", StringType()),
StructField("Location", StringType()),
StructField("Locality", StringType()),
StructField("City", StringType()),
StructField("Cuisine", StringType()),
StructField("Rating", FloatType()),
StructField("Votes", IntegerType()),
StructField("Cost", IntegerType()),
]
)
df = spark.read.schema(schema).csv(
"/kaggle/input/indian-restaurants-2023/restaurants.csv", header=True
)
type(df)
df.printSchema()
df.limit(3).toPandas()
# ### Find Missing Values
str_col = ["Name", "Location", "Locality", "City", "Cuisine"]
num_col = ["Votes", "Cost"]
missing_values = {}
for column in df.columns:
if column in str_col:
missing_count = df.filter(
col(column).eqNullSafe(None) | col(column).isNull()
).count()
# eqNullSafe(): Equality test that is safe for null values.
missing_values.update({column: missing_count})
if column in num_col:
missing_count = df.filter(
col(column).isin([np.nan]) | col(column).isNull()
).count()
missing_values.update({column: missing_count})
missing_df = pd.DataFrame.from_dict([missing_values])
missing_df
# ### Utility Functions
def plot_bar(data, labels, xlabel, title):
fig = plt.figure(figsize=[11, 4])
ax = fig.gca()
colors = [
"#f9cdac",
"#f2a49f",
"#ec7c92",
"#e65586",
"#bc438b",
"#933291",
"#692398",
"#551c7b",
"#41155e",
"#2d0f41",
]
plt.barh(labels, data, color=colors)
plt.xlabel(xlabel)
for i, v in enumerate(data):
ax.text(v + 0.5, i, str(v), color="#424242")
plt.title(title)
plt.show()
def plot_pie(data, labels, legend_title, title):
fig = plt.figure(figsize=(10, 5))
ax = fig.gca()
    # Creating autopct arguments
def func(pct):
return "{:.1f}%".format(pct)
colors = [
"#f9cdac",
"#f2a49f",
"#ec7c92",
"#e65586",
"#bc438b",
"#933291",
"#692398",
"#551c7b",
"#41155e",
"#2d0f41",
]
wedges, texts, autotexts = ax.pie(x=data, autopct=lambda pct: func(pct))
ax.legend(
wedges,
labels,
title=legend_title,
loc="center left",
bbox_to_anchor=(1.2, 0, 0.5, 1),
)
plt.setp(autotexts, size=8, weight="normal")
plt.title(title)
plt.show()
# ### Exploratory Data Analysis
# #### Top 5 Expensive Restaurants In India
top_expensive_restro = df.sort(col("Cost").desc()).limit(5).toPandas()
plot_bar(
top_expensive_restro.Cost.values,
top_expensive_restro.Name.values,
"Cost",
"Top 5 Expensive Restaurants in India",
)
# #### Restaurants with Maximum Rating and Popularity By City
city_list = [
city[0]
for city in df.dropDuplicates(["City"]).select("City").toPandas().values.tolist()
]
for i, city in enumerate(city_list):
if i % 12 == 0:
print()
print(city, end=",\t")
city_input = "Mumbai"
popular_restro = (
df.filter(col("City") == city_input).sort(col("Votes").desc()).limit(10).toPandas()
)
plot_bar(
popular_restro.Votes.values,
popular_restro.Name.values,
"Votes",
"Top 10 Popular Restaurants in " + city_input,
)
threshold_votes = 20
rating_restro = (
df.filter(col("City") == city_input)
.filter(col("Votes") > threshold_votes)
.sort(col("Rating").desc(), desc(col("Votes")))
.limit(10)
.toPandas()
)
plot_bar(
rating_restro.Rating.values,
rating_restro.Name.values,
"Rating",
"Top 10 Highly Rated Restaurants in " + city_input + " By Users",
)
# #### Relation Between Average Cost And Rating
sns.kdeplot(df.toPandas()["Rating"], fill=True)
plt.title("Ratings distribution")
plt.show()
# On average, the majority of restaurants have ratings between 3.5 and 4.5
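# A quick way to check this claim is to compute the share of restaurants rated between 3.5 and 4.5
# (a small illustrative check over all rows with a non-null rating):
in_band = df.filter(col("Rating").between(3.5, 4.5)).count()
total = df.filter(col("Rating").isNotNull()).count()
print(f"Share of restaurants rated between 3.5 and 4.5: {in_band / total:.1%}")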
sns.kdeplot(df.toPandas()["Cost"], fill=True)
plt.title("Cost distribution")
plt.show()
plt.plot(
"Cost", "Rating", data=df.toPandas().sample(2000), linestyle="none", marker="o"
)
plt.xlabel("Cost")
plt.ylabel("Rating")
plt.title("Relation Between Average Cost And Rating")
plt.show()
def price_range_round(val):
if val <= 1000:
return 1
elif 1000 < val <= 2000:
return 2
elif 2000 < val <= 3000:
return 3
elif 3000 < val <= 4000:
return 4
elif 4000 < val <= 5000:
return 5
elif 5000 < val <= 6000:
return 6
elif 6000 < val <= 7000:
return 7
elif 7000 < val <= 8000:
return 8
udf_price_range = udf(lambda val: price_range_round(val))
df = df.withColumn("price_range", udf_price_range("Cost"))
df.toPandas().head(3)
sns.boxplot(
x="price_range", y="Rating", data=df.toPandas().sort_values(by=["price_range"])
)
plt.title("Relationship between Price range and Ratings")
plt.show()
# This suggests that as the average cost increases, there is a higher probability that the restaurant will be highly rated.
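# To quantify this relationship rather than reading it off the boxplot, one could compute the Pearson
# correlation between Cost and Rating directly in Spark (an illustrative check, not the author's code):
cost_rating_corr = df.stat.corr("Cost", "Rating")
print(f"Pearson correlation between Cost and Rating: {cost_rating_corr:.3f}")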
# #### Restaurants with Maximum Outlets
top_outlets = (
df.groupBy(col("Name"))
.agg(count(col("Name")).alias("name_count"))
.filter(col("name_count") > 1)
.sort(desc(col("name_count")))
.limit(10)
.toPandas()
)
top_outlets.head(3)
plot_bar(
top_outlets.name_count.values,
top_outlets.Name.values,
"Outlets",
"Top 10 Restaurants With Maximum Outlets",
)
outlet_input = "Mad Over Donuts"
outlet_destri_cities = (
df.filter(col("Name") == outlet_input)
.groupBy(col("City"))
.agg(count(col("City")).alias("city_count"))
.toPandas()
)
outlet_destri_cities.head()
plot_pie(
outlet_destri_cities.city_count.values.tolist(),
outlet_destri_cities.City.values.tolist(),
"Cities",
outlet_input + " Outlets in Indian Cities",
)
outlet_destri_locality = (
df.filter(col("City") == city_input)
.filter(col("Name") == outlet_input)
.groupBy(col("locality"))
.agg(count(col("locality")).alias("locality_count"))
.toPandas()
)
outlet_destri_locality.head(3)
plot_pie(
outlet_destri_locality.locality_count.values.tolist(),
outlet_destri_locality.locality.values.tolist(),
"Cities",
outlet_input + " Outlets in " + city_input + " Locality",
)
# #### Popular Cuisines By City
cuisines = []
def format_cuisine(line):
cuisine_list = line.split(",")
for cuisine in cuisine_list:
cuisine = cuisine.strip()
if cuisine not in cuisines:
cuisines.append(cuisine)
for line in df.select("Cuisine").collect():
format_cuisine(line.Cuisine)
cuisines_count = {cuisine: 0 for cuisine in cuisines}
df_cuisines_list = (
df.filter(col("City") == city_input)
.select("Cuisine")
.rdd.flatMap(lambda x: x)
.collect()
)
for row in df_cuisines_list:
cuisine_list = row.split(", ")
for cuisine in cuisine_list:
if cuisine == "Multi-Cuisine":
continue
cuisine = cuisine.strip()
cuisines_count[cuisine] += 1
cuisines_count = dict(sorted(cuisines_count.items(), key=lambda x: x[1], reverse=True))
for i, (key, val) in enumerate(cuisines_count.items()):
print(key, val)
if i == 4:
break
plot_bar(
list(cuisines_count.values())[:10],
list(cuisines_count.keys())[:10],
"Restaurants",
"Top 10 Popular Cuisines In " + city_input,
)
# #### Popular Restaurants With A Particular Cuisine
for i, cuisine in enumerate(cuisines):
if i % 12 == 0:
print()
print(cuisine, end=",\t")
cuisine_input = "North Indian"
popular_cuisines = (
df.filter(col("City") == city_input)
.filter(df.Cuisine.contains(cuisine_input))
.sort(desc(col("Votes")), desc(col("Rating")))
)
top = popular_cuisines.count() if popular_cuisines.count() < 10 else 10
top_popular_cuisines = popular_cuisines.limit(top).toPandas()
plot_bar(
top_popular_cuisines.Votes.values,
top_popular_cuisines.Name.values,
"Votes",
"Top "
+ str(top)
+ " Restaurants with "
+ cuisine_input
+ " Cuisine in "
+ city_input
+ " By Popularity",
)
threshold_votes = 20
rating_cuisines = (
df.filter(col("City") == city_input)
.filter(col("Votes") > threshold_votes)
.filter(df.Cuisine.contains(cuisine_input))
.sort(desc(col("Rating")), desc(col("Votes")))
)
top = rating_cuisines.count() if rating_cuisines.count() < 10 else 10
top_rating_cuisines = rating_cuisines.limit(top).toPandas()
plot_bar(
top_rating_cuisines.Rating.values,
top_rating_cuisines.Name.values,
"Rating",
"Top "
+ str(top)
+ " Restaurants with "
+ cuisine_input
+ " Cuisine in "
+ city_input
+ " Rated By Users",
)
plot_pie(
popular_cuisines.toPandas().Locality.value_counts().values.tolist(),
popular_cuisines.toPandas().Locality.value_counts().index.tolist(),
"Localities",
"Restaurants with " + cuisine_input + " Cuisine in " + city_input + " Locality",
)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/405/129405966.ipynb
|
indian-restaurants-2023
|
arnabchaki
|
[{"Id": 129405966, "ScriptId": 38103189, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5331678, "CreationDate": "05/13/2023 14:08:19", "VersionNumber": 1.0, "Title": "Indian Restaurants EDA Pyspark", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 259.0, "LinesInsertedFromPrevious": 259.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 5}]
|
[{"Id": 185425051, "KernelVersionId": 129405966, "SourceDatasetVersionId": 5538457}]
|
[{"Id": 5538457, "DatasetId": 3192116, "DatasourceVersionId": 5613080, "CreatorUserId": 7428813, "LicenseName": "Database: Open Database, Contents: Database Contents", "CreationDate": "04/27/2023 09:57:57", "VersionNumber": 1.0, "Title": "Indian Restaurants 2023 \ud83c\udf72", "Slug": "indian-restaurants-2023", "Subtitle": "List of restaurants across all across India in 2023", "Description": "# Context\n\nDineout is a table booking platform helping customers to do table booking in their favourite restaurants for free and help them get great discounts.\n\nThe website is well known for displaying user ratings for restaurants, hotels, b&b, touristic attractions, and other places, with a total word count of all reviews is more than 10 million.\n\n# Content\nThe Dineout dataset includes thousands of restaurants with attributes such as location data, average rating, number of reviews, cuisine types, etc.\n\nThe dataset combines the restaurants from the main Indian cities.\n\n# Acknowledgements\nData has been retrieved from the publicly available website https://www.dineout.co.in/.\n\nAll the restaurants from the main Indian cities have been scraped in early March 2023.\n\n# Inspiration\nTo provide further information in regards to the restaurants details that make them successful and appreciated by the users, with the possibility to compare the common features of different European countries regarding the average ratings, awards, open hours, reviews count, etc.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3192116, "CreatorUserId": 7428813, "OwnerUserId": 7428813.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5538457.0, "CurrentDatasourceVersionId": 5613080.0, "ForumId": 3256613, "Type": 2, "CreationDate": "04/27/2023 09:57:57", "LastActivityDate": "04/27/2023", "TotalViews": 13696, "TotalDownloads": 2717, "TotalVotes": 44, "TotalKernels": 10}]
|
[{"Id": 7428813, "UserName": "arnabchaki", "DisplayName": "randomarnab", "RegisterDate": "05/16/2021", "PerformanceTier": 2}]
|
| false | 0 | 3,336 | 5 | 3,615 | 3,336 |
||
129405875
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
from google.cloud import bigquery
client = bigquery.Client()
sql = """
WITH
TopTermsByDate AS (
SELECT DISTINCT refresh_date AS date, term
FROM `bigquery-public-data.google_trends.top_terms`
),
DistinctDates AS (
SELECT DISTINCT date
FROM TopTermsByDate
)
SELECT
DATE_DIFF(Dates2.date, Date1Terms.date, DAY)
AS days_apart,
COUNT(DISTINCT (Dates2.date || Date1Terms.date))
AS num_date_pairs,
COUNT(Date1Terms.term) AS num_date1_terms,
SUM(IF(Date2Terms.term IS NOT NULL, 1, 0))
AS overlap_terms,
SAFE_DIVIDE(
SUM(IF(Date2Terms.term IS NOT NULL, 1, 0)),
COUNT(Date1Terms.term)
) AS pct_overlap_terms
FROM
TopTermsByDate AS Date1Terms
CROSS JOIN
DistinctDates AS Dates2
LEFT JOIN
TopTermsByDate AS Date2Terms
ON
Dates2.date = Date2Terms.date
AND Date1Terms.term = Date2Terms.term
WHERE
Date1Terms.date <= Dates2.date
GROUP BY
days_apart
ORDER BY
days_apart;
"""
pct_overlap_terms_by_days_apart = client.query(sql).to_dataframe()
pct_overlap_terms_by_days_apart.head()
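# The query above measures how quickly the set of top search terms turns over: for each gap in days it
# reports the share of day-one terms that still appear on the later date. A minimal sketch of how one
# might visualize that decay (assuming the query returned at least a few rows):
import matplotlib.pyplot as plt

plt.plot(
    pct_overlap_terms_by_days_apart["days_apart"],
    pct_overlap_terms_by_days_apart["pct_overlap_terms"],
)
plt.xlabel("Days apart")
plt.ylabel("Share of overlapping top terms")
plt.title("Overlap of Google Trends top terms by gap in days")
plt.show()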
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/405/129405875.ipynb
| null | null |
[{"Id": 129405875, "ScriptId": 38476945, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 3327796, "CreationDate": "05/13/2023 14:07:32", "VersionNumber": 1.0, "Title": "notebookf5d8f332cb", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 66.0, "LinesInsertedFromPrevious": 66.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 533 | 0 | 533 | 533 |
||
129405058
|
<jupyter_start><jupyter_text>Facebook Live sellers in Thailand, UCI ML Repo
Kaggle dataset identifier: facebook-live-sellers-in-thailand-uci-ml-repo
<jupyter_script># Import the required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from kneed import KneeLocator
from sklearn.metrics import silhouette_score, calinski_harabasz_score
from sklearn.model_selection import GridSearchCV
import warnings
warnings.filterwarnings("ignore")
# Load the dataset
df = pd.read_csv("Live.csv")
# About the dataset:
# The dataset was obtained from the UCI ML Repository. https://archive.ics.uci.edu/ml/datasets/Facebook+Live+Sellers+in+Thailand
# Live selling is becoming increasingly popular in Asian countries. Small sellers can now reach a wider audience and connect with many customers.
# Research paper: Nassim Dehouche and Apiradee Wongkitrungrueng. Facebook Live as a Direct Selling Channel, 2018, Proceedings of ANZMAC 2018: The 20th Australian and New Zealand Marketing Academy Conference. Adelaide (Australia), 3-5 December 2018.
# Facebook pages of 10 Thai fashion and cosmetics retail sellers. Posts of different types (videos, photos, statuses and links). The engagement metrics consist of comments, shares and reactions.
# # **Data cleaning and transformation**
df.shape
# We can see that there are 7050 rows and 16 features in the dataset.
# The dataset description states that there are 7051 rows and 12 features, so we can infer that the first row is the header and that there are four extra columns in the dataset; let's take a closer look.
# Preview of the dataset
df.head()
# Summary of the dataset
df.info()
# Check for null values
df.isnull().sum()
# Here we see the four extra columns, so we will drop them.
# Drop the columns
df.drop(["Column1", "Column2", "Column3", "Column4"], axis=1, inplace=True)
# Check the dataset summary again to make sure the columns were dropped
df.info()
# We also note that there are three categorical variables of type "object"; the rest are numeric, of type "int64".
# Look at the descriptive statistics of the numeric variables
df.describe()
# Since there are three categorical variables, we will analyze all of them.
# For the first variable: status_id
df["status_id"].unique()
# Check how many distinct values there are
len(df["status_id"].unique())
# Since there are so many distinct values, close to the total number of rows, and as its name suggests, this column is an identifier for each record; it is not useful for the analysis, so we will drop the variable.
# For the second variable: status_type
df["status_type"].unique()
# Check how many distinct values there are
len(df["status_type"].unique())
# We see that this variable has only four categories and that it refers to the type of post.
# For the third variable: status_published
df["status_published"].unique()
# Check how many distinct values there are
len(df["status_published"].unique())
# Again we see 6913 unique values in this variable; being close to the total number of rows, and as its name suggests, it identifies each record by the date and time of the post. This is not useful for the analysis, so we will drop this variable as well.
# Drop the variables we do not need
df.drop(["status_id", "status_published"], axis=1, inplace=True)
# Check the summary again to make sure these variables were dropped
df.info()
# Look again at the first rows of the dataset
df.head()
# # **Exploratory analysis**
# Visualization of the post type
sns.histplot(df["status_type"], kde=False)
plt.title("Distribución del tipo de publicación")
plt.xlabel("Tipo de publicación")
plt.ylabel("Frecuencia")
plt.show()
# Visualization of the distribution of the number of reactions
sns.histplot(df["num_reactions"], kde=False)
plt.title("Distribución de número de reacciones")
plt.xlabel("Número de reacciones")
plt.ylabel("Frecuencia")
plt.show()
# Visualization of the relationship between the number of likes and the post type
sns.boxplot(x="status_type", y="num_likes", data=df)
plt.title("Relación entre el número de likes y el tipo de publicación")
plt.xlabel("Tipo de publicación")
plt.ylabel("Número de likes")
plt.show()
# # **Data preparation and feature selection**
# Checking and handling outliers
# Count the number of outliers in each variable using a threshold of 2.1 times the IQR
# Compute the interquartile range (IQR) for each variable
stats = df.describe()
Q1 = stats.loc["25%"]
Q3 = stats.loc["75%"]
IQR = Q3 - Q1
# Identify outliers
outliers = ((df < Q1 - 2.1 * IQR) | (df > Q3 + 2.1 * IQR)).sum()
# Print the results
print(outliers)
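# The cell above only counts the outliers. If we wanted to actually treat them, one simple hedged option
# is to cap (winsorize) the numeric columns at the same IQR-based bounds; a minimal sketch, not applied
# to the analysis that follows:
num_cols = df.select_dtypes(include="number").columns
lower_bound = Q1 - 2.1 * IQR
upper_bound = Q3 + 2.1 * IQR
df_capped = df.copy()
df_capped[num_cols] = df_capped[num_cols].clip(
    lower=lower_bound[num_cols], upper=upper_bound[num_cols], axis=1
)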
# **Now let's select our target variable**
X = df.drop("status_type", axis=1)
y = df["status_type"]
X.head()
# **Change the data type of the categorical variables**
le = LabelEncoder()
y = le.fit_transform(y)
# **We copy the dataset, then encode the categorical target variable as numeric and build the correlation matrix to check whether there are highly correlated variables that could affect our analysis.**
df1 = df
df1["status_type"] = le.fit_transform(df1["status_type"])
# Compute the correlation matrix (Pearson correlation coefficient)
corr_matrix = df1.corr()
# Define a threshold to identify highly correlated variables
threshold = 0.15
# Define the filter
filter_ = np.abs(corr_matrix["status_type"]) > threshold
# Get the highly correlated variables
corr_features = corr_matrix.columns[filter_].tolist()
# Visualize the correlation matrix
plt.figure(figsize=(15, 10))
sns.heatmap(df[corr_features].corr(), annot=True, cmap="PiYG", fmt=".2f")
plt.title("Correlación entre variables (Threshold: {})".format(threshold))
# Look at the summary of X
X.info()
# Look at how X turned out
X.head()
# Look at how our y turned out
np.unique(y)
# We see that the target variable values are now numeric and range from 0 to 3, representing the post type. \
# 0 is Link \
# 1 is Photo\
# 2 is Status\
# 3 is Video
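# The 0-3 mapping above can also be recovered programmatically from the fitted encoder instead of being
# read off by hand (a quick illustrative check):
label_mapping = dict(zip(le.transform(le.classes_), le.classes_))
print(label_mapping)  # e.g. {0: 'link', 1: 'photo', 2: 'status', 3: 'video'}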
# **Now we scale the variables**
cols = X.columns
cols
# Scale the data with MinMaxScaler
ms = MinMaxScaler()
X = ms.fit_transform(X)
X = pd.DataFrame(X, columns=[cols])
X.head()
# # **Applying the K-means algorithm**
# ## **PCA**
# K-means model with 2 clusters
kmeans = KMeans(init="random", n_clusters=2, n_init=10, max_iter=100, random_state=42)
# Fit the algorithm to the scaled features
kmeans.fit(X)
# Arguments for the algorithm
kmeans_kwargs = {"init": "random", "n_init": 10, "max_iter": 300, "random_state": 42}
# Apply PCA to reduce to 3 dimensions
pca = PCA(n_components=3)
X_pca = pca.fit_transform(X)
# --- Explained variance ---
PCA_variance = pd.DataFrame(
{"Varianza explicada (%)": pca.explained_variance_ratio_ * 100}
)
fig, ax = plt.subplots(1, 1, figsize=(4, 3))
bar = sns.barplot(
x=["PC " + str(i) for i in range(1, 4)],
y=PCA_variance["Varianza explicada (%)"],
linewidth=1.5,
edgecolor="k",
color="#4bafb8",
alpha=0.8,
)
plt.show()
PCA_variance
# --- Weights of the variables that make up the principal components ---
pesos_pca = (
pd.DataFrame(pca.components_, columns=X.columns, index=["PC 1", "PC 2", "PC 3"])
.round(2)
.T
)
pesos_pca
# ## Choosing the appropriate number of clusters
# ### Elbow method
# Compute the inertia for different values of k
inertia = []
for k in range(1, 11):
kmeans = KMeans(n_clusters=k, **kmeans_kwargs)
kmeans.fit(X_pca)
inertia.append(kmeans.inertia_)
# Plot the elbow curve
plt.plot(range(1, 11), inertia)
plt.title("Método del codo")
plt.xticks(range(1, 11))
plt.xlabel("Número de clusters")
plt.ylabel("Inercia")
plt.show()
# Automatic selection of the number k
kl = KneeLocator(range(1, 11), inertia, curve="convex", direction="decreasing")
kl.elbow
# ### Silhouette coefficient
# Compute the silhouette score for different values of k
silhouette_scores = []
for k in range(2, 11):
kmeans = KMeans(n_clusters=k, **kmeans_kwargs)
labels = kmeans.fit_predict(X_pca)
score = silhouette_score(X_pca, labels)
silhouette_scores.append(score)
# Find the value of k with the highest silhouette score
best_k = silhouette_scores.index(max(silhouette_scores)) + 2
print("El número óptimo de clusters es:", best_k)
# Plot the results
plt.plot(range(2, 11), silhouette_scores)
plt.title("Coeficiente de silueta")
plt.xticks(range(2, 11))
plt.xlabel("Número de clusters")
plt.ylabel("Silhouette Scores")
plt.show()
# ## Hyperparameter tuning
# Create a KMeans model
kmeans = KMeans()
# Define the hyperparameter grid
param_grid = {
"n_clusters": [2, 3, 4, 5, 6, 7, 8, 9, 10],
"init": ["k-means++", "random"],
"max_iter": [100, 300, 500],
}
# Run the grid search
grid_search = GridSearchCV(kmeans, param_grid, cv=5)
grid_search.fit(X_pca)
# Print the best hyperparameters found
print("Los mejores hiperparámetros son:", grid_search.best_params_)
# ## Analysis and evaluation of the K-means algorithm
# Fit a K-means model with 2 clusters
kmeans = KMeans(n_clusters=2, **kmeans_kwargs)
labels = kmeans.fit_predict(X_pca)
# Plot the samples in a scatter plot, colored by cluster
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=labels, cmap="viridis")
plt.title("K-means con 2 clusters")
plt.show()
# Compute the silhouette coefficient
silhouette_avg = silhouette_score(X_pca, kmeans.labels_)
print("El coeficiente de silueta es:", silhouette_avg)
# Fit a K-means model with 3 clusters
kmeans = KMeans(n_clusters=3, **kmeans_kwargs)
labels = kmeans.fit_predict(X_pca)
# Plot the samples in a scatter plot, colored by cluster
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=labels, cmap="viridis")
plt.title("K-means con 3 clusters")
plt.show()
# Compute the silhouette coefficient
silhouette_avg = silhouette_score(X_pca, kmeans.labels_)
print("El coeficiente de silueta es:", silhouette_avg)
# The silhouette coefficient can vary between -1 and 1, where a value closer to 1 indicates that the samples are well separated into their clusters, while a value close to 0 indicates that there is overlap between the clusters.\
# \
# With the 3-cluster model we obtain a silhouette coefficient of 0.831, which indicates that the samples are well separated into their clusters.
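# To go one step beyond the single averaged value, one could inspect the per-sample silhouette values for
# each of the 3 clusters (a small hedged sketch):
from sklearn.metrics import silhouette_samples

sample_silhouettes = silhouette_samples(X_pca, labels)
for cluster_id in np.unique(labels):
    cluster_values = sample_silhouettes[labels == cluster_id]
    print(
        f"Cluster {cluster_id}: mean silhouette {cluster_values.mean():.3f} over {len(cluster_values)} samples"
    )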
# Plot the samples in a 3D scatter plot, colored by cluster
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(14, 16))
ax = fig.add_subplot(111, projection="3d")
ax.scatter(X_pca[:, 0], X_pca[:, 1], X_pca[:, 2], c=labels, cmap="viridis")
ax.set_xlabel("PC1")
ax.set_ylabel("PC2")
ax.set_zlabel("PC3")
plt.title("K-means con 3 clusters")
plt.show()
# --- Evaluación del modelo ---
print("Inertia : ", kmeans.inertia_)
print("Silhouette Score : ", silhouette_score(X_pca, labels))
print("Calinski harabasz score: ", calinski_harabasz_score(X_pca, labels))
# **In this case the inertia is 56.80, which indicates that the clusters are relatively well defined. The Silhouette Score of ~0.83 indicates that the clusters are well separated and overlap little. In addition, the Calinski-Harabasz Score of 7910 indicates that the clusters are well defined and well separated from each other.**
# **Overall, these three values indicate that the model is of good quality and that the resulting clusters are meaningful and well defined.**
# --- Add the K-means cluster assignment to df1 ----
df1["cluster_result"] = labels + 1
df1["cluster_result"] = "Cluster " + df1["cluster_result"].astype(str)
# --- Compute the mean of the df1 variables ---
df_profile_overall = pd.DataFrame()
df_profile_overall["Overall"] = df1.describe().loc[["mean"]].T
# --- Summarize the mean of each cluster ---
df_cluster_summary = (
df1.groupby("cluster_result")
.describe()
.T.reset_index()
.rename(columns={"level_0": "Atributo", "level_1": "Métrica"})
)
df_cluster_summary = df_cluster_summary[
df_cluster_summary["Métrica"] == "mean"
].set_index("Atributo")
# --- Combine both DataFrames ---
df_profile = df_cluster_summary.join(df_profile_overall).reset_index()
df_profile.round(2)
# 0 is Link \
# 1 is Photo\
# 2 is Status\
# 3 is Video
# **Cluster 1:**
# Posts without much effort or quality, published constantly, where the numbers of reactions, likes and comments are very similar; they may be posts about promotions, prices, opening hours or advertising.
# **Cluster 2:**
# Posts with many reactions and many likes but very few comments and shares. This may be because they are posts of very appetizing dishes that make people like them when they appear in their feed, but not comment or share because the post itself is relatively simple.
# **Cluster 3:**
# Posts that stand out for the large number of comments and shares; they also have plenty of likes and reactions, although not as many as cluster 2. They also have the highest counts of "love", "haha", "wow", "sad" and "angry" reactions. This may be because these posts are more viral and people are more inclined to comment and share so that their friends see them.
# They may be posts about recipes and preparations.
# Contingency table
crosstab = pd.crosstab(df1["status_type"], df1["cluster_result"])
crosstab
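# Sketch (added for illustration, not part of the original notebook): normalizing the
# contingency table by row makes it easier to read how each post type is distributed
# across the clusters.
pd.crosstab(df1["status_type"], df1["cluster_result"], normalize="index").round(2)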
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/405/129405058.ipynb
|
facebook-live-sellers-in-thailand-uci-ml-repo
|
ashishg21
|
[{"Id": 129405058, "ScriptId": 38476633, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4923978, "CreationDate": "05/13/2023 14:00:31", "VersionNumber": 1.0, "Title": "K-Means Clustering Python in Spanish", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 409.0, "LinesInsertedFromPrevious": 409.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185423314, "KernelVersionId": 129405058, "SourceDatasetVersionId": 545527}]
|
[{"Id": 545527, "DatasetId": 260377, "DatasourceVersionId": 561994, "CreatorUserId": 1394856, "LicenseName": "Unknown", "CreationDate": "07/11/2019 03:38:01", "VersionNumber": 1.0, "Title": "Facebook Live sellers in Thailand, UCI ML Repo", "Slug": "facebook-live-sellers-in-thailand-uci-ml-repo", "Subtitle": "Live selling is increasingly getting popular in Asian countries.", "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 567089.0, "TotalUncompressedBytes": 567089.0}]
|
[{"Id": 260377, "CreatorUserId": 1394856, "OwnerUserId": 1394856.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 545527.0, "CurrentDatasourceVersionId": 561994.0, "ForumId": 271672, "Type": 2, "CreationDate": "07/11/2019 03:38:01", "LastActivityDate": "07/11/2019", "TotalViews": 29063, "TotalDownloads": 7210, "TotalVotes": 17, "TotalKernels": 15}]
|
[{"Id": 1394856, "UserName": "ashishg21", "DisplayName": "Ashish Gaurav", "RegisterDate": "11/06/2017", "PerformanceTier": 1}]
|
| false | 0 | 4,706 | 0 | 4,750 | 4,706 |
||
129405589
|
import shutil
current_movies_file = "/kaggle/input/hausarbeit12/movie.csv"
current_ratings_file = "/kaggle/input/hausarbeit123/rating.csv"
new_movies_file = "movies.csv"
new_ratings_file = "ratings.csv"
shutil.copyfile(current_movies_file, new_movies_file)
shutil.copyfile(current_ratings_file, new_ratings_file)
import pandas as pd
movies_df = pd.read_csv("movies.csv")
ratings_df = pd.read_csv("ratings.csv")
print(movies_df.columns)
print(ratings_df.columns)
merged_data = pd.merge(movies_df, ratings_df, on="movieId")
print(merged_data.head())
merged_data.isnull().sum().sort_values(ascending=False)
merged_data.shape
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/405/129405589.ipynb
| null | null |
[{"Id": 129405589, "ScriptId": 38474293, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14390918, "CreationDate": "05/13/2023 14:05:01", "VersionNumber": 1.0, "Title": "notebook1ae46f6f24", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 34.0, "LinesInsertedFromPrevious": 34.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 221 | 0 | 221 | 221 |
||
129591094
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import re
import string
data = pd.read_csv("../input/nlp-getting-started/train.csv")
test_data = pd.read_csv("../input/nlp-getting-started/test.csv")
data.info()
data["target"].value_counts()
test_data.info()
# Text preprocessing steps:
# - remove URLs
# - remove @mentions
# - lowercasing
# - remove stop words
# - stemming and lemmatization
# - spell checking (this didn't help much, perhaps because this dataset contains few misspellings)
# - remove punctuation
# - normalization
# Keyword preprocessing:
# - split on %20
def remove_url(text):
url_pattern = re.compile(r"https?://t\.co/[^\s]*")
new_text = url_pattern.sub("", text)
return new_text
def remove_at(text):
at_pattern = re.compile(r"@[^\s]*")
new_text = at_pattern.sub("", text)
return new_text
def remove_punctuation(text):
new_text = text.translate(str.maketrans("", "", string.punctuation))
return new_text
# !pip install pyspellchecker
# from spellchecker import SpellChecker
# spell = SpellChecker()
# def spell_check(text):
# word_list = text.split()
# word_list = [spell.correction(w) if w.isalpha() else w for w in word_list]
# word_list = [w for w in word_list if w is not None]
# return " ".join(word_list)
def keyword_preprocess(text):
if pd.notnull(text):
text = text.replace("%20", " ")
else:
text = ""
return text
# from sklearn.feature_extraction.text import CountVectorizer
# from nltk.stem.snowball import EnglishStemmer
# stemmer = EnglishStemmer()
# analyzer = CountVectorizer().build_analyzer()
# def stemmed_words(doc):
# return (stemmer.stem(w) for w in analyzer(doc))
# stem_vectorizer = CountVectorizer(analyzer=stemmed_words)
# X = stem_vectorizer.fit_transform(data['text'][:10])
# print(X.toarray())
# print(stem_vectorizer.get_feature_names_out())
# from nltk import word_tokenize
# from nltk.stem import WordNetLemmatizer
# class LemmaTokenizer:
# def __init__(self):
# self.wnl = WordNetLemmatizer()
# def __call__(self, doc):
# return [self.wnl.lemmatize(t) for t in word_tokenize(doc)]
# lemma_vectorizer = CountVectorizer(tokenizer=LemmaTokenizer())
# ## Encoding
def text_preprocess(text):
text = remove_url(text)
text = remove_at(text)
return text
# remove url and @ from text
data["text"] = data["text"].apply(text_preprocess)
test_data["text"] = test_data["text"].apply(text_preprocess)
# remove %20 from keyword
data["keyword"] = data["keyword"].apply(keyword_preprocess)
test_data["keyword"] = test_data["keyword"].apply(keyword_preprocess)
# combine keyword and text
data["keyword_text"] = data.apply(
lambda row: row["keyword"] + " " + row["text"], axis=1
)
test_data["keyword_text"] = test_data.apply(
lambda row: row["keyword"] + " " + row["text"], axis=1
)
# check spell
# data['keyword_text'] = data['keyword_text'].apply(spell_check)
# test_data['keyword_text'] = test_data['keyword_text'].apply(spell_check)
# TF-IDF
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english", max_features=10000)
corpus = pd.concat([data["keyword_text"], test_data["keyword_text"]])
X = vectorizer.fit_transform(corpus)
train_validate, test = X[:7613], X[7613:]  # the first 7613 rows belong to the training data
train_validate.shape
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(
train_validate, data["target"], test_size=0.2, random_state=42
)
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(random_state=0).fit(X_train, y_train)
y_valid_predict = clf.predict(X_valid)
from sklearn.metrics import f1_score
f1_score(y_valid, y_valid_predict)
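# A sketch (added for illustration): cross-validation gives a less split-dependent
# estimate of the F1 score than the single hold-out split above.
from sklearn.model_selection import cross_val_score
cv_f1 = cross_val_score(
    LogisticRegression(random_state=0), train_validate, data["target"], cv=5, scoring="f1"
)
print("Mean cross-validated F1:", round(cv_f1.mean(), 3))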
clf = LogisticRegression(random_state=0).fit(train_validate, data["target"])
y_test_predict = clf.predict(test)
submission = pd.DataFrame({"id": test_data["id"], "target": y_test_predict})
submission.to_csv("submission.csv", index=False)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/591/129591094.ipynb
| null | null |
[{"Id": 129591094, "ScriptId": 38410462, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14274303, "CreationDate": "05/15/2023 05:03:57", "VersionNumber": 5.0, "Title": "Disaster_Tweets_Preprocessing", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 139.0, "LinesInsertedFromPrevious": 7.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 132.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 1,282 | 0 | 1,282 | 1,282 |
||
129591392
|
<jupyter_start><jupyter_text>Hotel Booking
### Context
This dataset contains 119390 observations for a City Hotel and a Resort Hotel. Each observation represents a hotel booking between the 1st of July 2015 and 31st of August 2017, including booking that effectively arrived and booking that were canceled.
### Content
Since this is real hotel data, all data elements pertaining to hotel or customer identification were deleted.
Four Columns, 'name', 'email', 'phone number' and 'credit_card' have been artificially created and added to the dataset.
Kaggle dataset identifier: hotel-booking
<jupyter_script># # Business Problem
# In recent years, the City Hotel and the Resort Hotel have seen high cancellation rates. As a result, each hotel is now dealing with a number of issues, including lower revenues and less-than-ideal room utilization. Consequently, lowering cancellation rates is both hotels' primary goal for generating revenue more efficiently, and ours is to offer thorough business advice to address this problem.
# The main topics of this report are the analysis of hotel booking cancellations, together with the other factors that bear on the hotels' business and yearly revenue generation.
# # Assumptions:
# - No unusual occurrences between 2015 and 2017 will have a substantial impact on the data used.
# - The information is still current and can be used to analyze a hotel's possible plans in an efficient manner.
# - There are no unanticipated negatives to the hotel employing any advised technique.
# - The hotels are not currently using any of the suggested solutions.
# - The biggest factor affecting the effectiveness of earning income is booking cancellations.
# - Cancellations result in vacant rooms for the booked length of time.
# - Clients make hotel reservations the same year they make cancellations.
# ## Research Question
# - What are the variables that affect hotel reservation cancellations?
# - How can we reduce hotel reservation cancellations?
# - How will hotels be assisted in making pricing and promotional decisions?
# ### Hypothesis
# 1. More cancellations occur when prices are higher.
# 2. When there is a longer waiting list, customers tend to cancel more frequently.
# 3. The majority of clients are coming from offline travel agents to make their reservations.
# # Importing Libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
# # Loading Dataset
df = pd.read_csv("/kaggle/input/hotel-booking/hotel_booking.csv")
df
# # Exploratory Data Analysis and Data Cleaning
df.head()
df.tail()
# drop unnecessary columns
df.drop(["name", "email", "phone-number", "credit_card"], axis=1, inplace=True)
df.head()
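# Sketch (added for illustration): a first numeric look at the three hypotheses stated
# above. The column names used here ('is_canceled', 'adr', 'days_in_waiting_list',
# 'market_segment') are assumptions based on the usual schema of this dataset.
print(df.groupby("is_canceled")["adr"].mean())  # H1: average daily rate vs. cancellation
print(df.groupby("is_canceled")["days_in_waiting_list"].mean())  # H2: waiting list vs. cancellation
print(df["market_segment"].value_counts(normalize=True))  # H3: booking channels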
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/591/129591392.ipynb
|
hotel-booking
|
mojtaba142
|
[{"Id": 129591392, "ScriptId": 38534960, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10272507, "CreationDate": "05/15/2023 05:07:41", "VersionNumber": 1.0, "Title": "Data Analysis", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 47.0, "LinesInsertedFromPrevious": 47.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185816743, "KernelVersionId": 129591392, "SourceDatasetVersionId": 2378746}]
|
[{"Id": 2378746, "DatasetId": 1437463, "DatasourceVersionId": 2420641, "CreatorUserId": 3075307, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "06/29/2021 05:15:54", "VersionNumber": 1.0, "Title": "Hotel Booking", "Slug": "hotel-booking", "Subtitle": "Hotel booking demand datasets(Data in Brief:2019)", "Description": "### Context\n\nThis dataset contains 119390 observations for a City Hotel and a Resort Hotel. Each observation represents a hotel booking between the 1st of July 2015 and 31st of August 2017, including booking that effectively arrived and booking that were canceled.\n\n\n\n### Content\n\nSince this is hotel real data, all data elements pertaining hotel or costumer identification were deleted. \nFour Columns, 'name', 'email', 'phone number' and 'credit_card' have been artificially created and added to the dataset. \n\n\n### Acknowledgements\n\nThe data is originally from the article Hotel Booking Demand Datasets, written by Nuno Antonio, Ana Almeida, and Luis Nunes for Data in Brief, Volume 22, February 2019.\n\n\n### Inspiration\n\nYour data will be in front of the world's largest data science community. What questions do you want to see answered?", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1437463, "CreatorUserId": 3075307, "OwnerUserId": 3075307.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2378746.0, "CurrentDatasourceVersionId": 2420641.0, "ForumId": 1456934, "Type": 2, "CreationDate": "06/29/2021 05:15:54", "LastActivityDate": "06/29/2021", "TotalViews": 89277, "TotalDownloads": 12792, "TotalVotes": 156, "TotalKernels": 136}]
|
[{"Id": 3075307, "UserName": "mojtaba142", "DisplayName": "Mojtaba", "RegisterDate": "04/11/2019", "PerformanceTier": 2}]
|
| false | 1 | 524 | 0 | 674 | 524 |
||
129138822
|
<jupyter_start><jupyter_text>Stack Overflow Annual Developer Survey_2022
Kaggle dataset identifier: stack-overflow-annual-developer-survey-2022
<jupyter_script>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import re as re
import warnings
warnings.filterwarnings("ignore")
dt = pd.read_csv(
"/kaggle/input/stack-overflow-annual-developer-survey-2022/survey_results_public.csv"
)
dt.head()
dt.info()
dt = dt[
[
"Country",
"YearsCode",
"EdLevel",
"Employment",
"ConvertedCompYearly",
"Currency",
"Age",
]
]
dt = dt.rename({"ConvertedCompYearly": "Salary"}, axis=1)
dt
dt.shape
# **Handling Missing Values**
plt.figure(figsize=(10, 7))
sns.heatmap(data=dt.isnull())
dt = dt.dropna()
sns.heatmap(dt.isnull())
# **Data Cleaning**
dt.Currency
def Cuu(uuu):
pattern = re.compile(r"[A-Z][A-Z][A-Z]")
matches = pattern.findall(uuu)
return matches[0]
dt["Currency"] = dt.Currency.map(lambda x: Cuu(x))
dt.Currency.value_counts()
dt.EdLevel.value_counts()
def EdLevel(edlevel):
pattern1 = re.compile(r"^Bach")
pattern2 = re.compile(r"^Mas")
pattern3 = re.compile(r"^Some")
pattern4 = re.compile(r"^Other")
pattern5 = re.compile(r"^Asso")
pattern6 = re.compile(r"^Prof")
pattern7 = re.compile(r"^Prim")
pattern8 = re.compile(r"^Something")
pattern9 = re.compile(r"^Second")
match1 = pattern1.findall(edlevel)
match2 = pattern2.findall(edlevel)
match3 = pattern3.findall(edlevel)
match4 = pattern4.findall(edlevel)
match5 = pattern5.findall(edlevel)
match6 = pattern6.findall(edlevel)
match7 = pattern7.findall(edlevel)
match8 = pattern8.findall(edlevel)
match9 = pattern9.findall(edlevel)
hot = []
if match1 != hot:
return "Bachelors"
if match2 != hot:
return "Masters"
if match3 != hot:
return "college degree"
if match4 != hot:
return "Doctrate degree"
if match5 != hot:
return "Associate degree"
if match6 != hot:
return "Professional degree"
if match7 != hot:
return "Primary education"
if match8 != hot:
return "Other studies"
if match9 != hot:
return "Secoundary education"
dt["EdLevel"] = dt.EdLevel.map(lambda x: EdLevel(x))
dt.EdLevel.value_counts()
dt.Employment.value_counts()
def Employment(employment):
pattern1 = re.compile(r"full-time$")
pattern2 = re.compile(r"^Employ")
pattern3 = re.compile(r"self-employed$")
pattern4 = re.compile(r"part-time$")
pattern5 = re.compile(r"Retired$")
pattern6 = re.compile(r"^Indep")
match1 = pattern1.findall(employment)
match2 = pattern2.findall(employment)
match3 = pattern3.findall(employment)
match4 = pattern4.findall(employment)
match5 = pattern5.findall(employment)
match6 = pattern6.findall(employment)
hot = []
if match1 != hot and match2 != hot:
return "Full-time"
if match6 != hot and match3 != hot:
return "Self-employed"
if match2 != hot and match3 != hot:
return "Hybrid"
if match2 != hot and match4 != hot:
return "Part-time"
if match6 != hot and match4 != hot:
return "Part-time-freelancer"
if match5 != hot:
return "Retired"
else:
return "other"
dt["Employment"] = dt.Employment.map(lambda x: Employment(x))
dt.Employment.value_counts()
def ligit_cataogeries(catagories, cutoff):
    # Map categories with fewer than `cutoff` respondents into a single "other" bucket
    cat_map = {}
    for i in range(len(catagories)):
        if catagories.values[i] >= cutoff:
            cat_map[catagories.index[i]] = catagories.index[i]
        else:
            cat_map[catagories.index[i]] = "other"
    return cat_map
country_map = ligit_cataogeries(dt.Country.value_counts(), 200)
dt["Country"] = dt.Country.map(country_map)
dt.Country.value_counts()
Experence_map = ligit_cataogeries(dt.YearsCode.value_counts(), 100)
dt["YearsCode"] = dt.YearsCode.map(Experence_map)
dt.YearsCode.value_counts()
employment_map = ligit_cataogeries(dt.Employment.value_counts(), 100)
dt["Employment"] = dt.Employment.map(employment_map)
dt.Employment.value_counts()
currency_map = ligit_cataogeries(dt.Currency.value_counts(), 100)
dt["Currency"] = dt.Currency.map(currency_map)
dt.Currency.value_counts()
dt.shape
dt.Age.value_counts()
dt.Country.value_counts()
dt.shape
dt
# **EDA**
plt.figure(figsize=(10, 7))
sns.countplot(x=dt["Country"], data=dt)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/138/129138822.ipynb
|
stack-overflow-annual-developer-survey-2022
|
drrajkulkarni
|
[{"Id": 129138822, "ScriptId": 37759269, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8429178, "CreationDate": "05/11/2023 09:28:51", "VersionNumber": 2.0, "Title": "Stack_overflow_Survey_2022", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 172.0, "LinesInsertedFromPrevious": 55.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 117.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184924806, "KernelVersionId": 129138822, "SourceDatasetVersionId": 5524088}]
|
[{"Id": 5524088, "DatasetId": 3184945, "DatasourceVersionId": 5598641, "CreatorUserId": 8429178, "LicenseName": "Unknown", "CreationDate": "04/26/2023 00:46:34", "VersionNumber": 1.0, "Title": "Stack Overflow Annual Developer Survey_2022", "Slug": "stack-overflow-annual-developer-survey-2022", "Subtitle": "Survey records of 74k pluse Developers with 70 pluse columns,Have fun", "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3184945, "CreatorUserId": 8429178, "OwnerUserId": 8429178.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5524088.0, "CurrentDatasourceVersionId": 5598641.0, "ForumId": 3249367, "Type": 2, "CreationDate": "04/26/2023 00:46:34", "LastActivityDate": "04/26/2023", "TotalViews": 79, "TotalDownloads": 7, "TotalVotes": 0, "TotalKernels": 1}]
|
[{"Id": 8429178, "UserName": "drrajkulkarni", "DisplayName": "Raj Kulkarni", "RegisterDate": "09/24/2021", "PerformanceTier": 2}]
|
| false | 1 | 1,406 | 0 | 1,446 | 1,406 |
||
129138519
|
<jupyter_start><jupyter_text>Walmart Dataset (Retail)
**Dataset Description :**
This is the historical data that covers sales from 2010-02-05 to 2012-11-01, in the file Walmart_Store_sales. Within this file you will find the following fields:
Store - the store number
Date - the week of sales
Weekly_Sales - sales for the given store
Holiday_Flag - whether the week is a special holiday week 1 – Holiday week 0 – Non-holiday week
Temperature - Temperature on the day of sale
Fuel_Price - Cost of fuel in the region
CPI – Prevailing consumer price index
Unemployment - Prevailing unemployment rate
**Holiday Events**
Super Bowl: 12-Feb-10, 11-Feb-11, 10-Feb-12, 8-Feb-13
Labour Day: 10-Sep-10, 9-Sep-11, 7-Sep-12, 6-Sep-13
Thanksgiving: 26-Nov-10, 25-Nov-11, 23-Nov-12, 29-Nov-13
Christmas: 31-Dec-10, 30-Dec-11, 28-Dec-12, 27-Dec-13
**Analysis Tasks**
**Basic Statistics tasks**
1) Which store has maximum sales
2) Which store has maximum standard deviation i.e., the sales vary a lot. Also, find out the coefficient of mean to standard deviation
3) Which store/s has good quarterly growth rate in Q3’2012
4) Some holidays have a negative impact on sales. Find out holidays which have higher sales than the mean sales in non-holiday season for all stores together
5) Provide a monthly and semester view of sales in units and give insights
**Statistical Model**
For Store 1 – Build prediction models to forecast demand
Linear Regression – Utilize variables like date and restructure dates as 1 for 5 Feb 2010 (starting from the earliest date in order). Hypothesize if CPI, unemployment, and fuel price have any impact on sales.
Change dates into days by creating new variable.
Select the model which gives best accuracy.
Kaggle dataset identifier: walmart-dataset-retail
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Objective
# > In this project, we will answer:
# 1. Which store has minimum and maximum sales?
# 2. Which store has maximum standard deviation i.e., the sales vary a lot. Also, find out the coefficient of mean to standard deviation
# 3. Which store/s has good quarterly growth rate in Q3’2012
# 4. Some holidays have a negative impact on sales. Find out holidays which have higher sales than the mean sales in non-holiday season for all stores together
# 5. Provide a monthly and semester view of sales in units and give insights
# 6. Build prediction to forecast demand.
# importing libraries
import pandas as pd
import seaborn as sns
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
# Reading the walmart store sales csv file in the dataframe for further analysis in the following code snippet.
store_data = pd.read_csv("/kaggle/input/walmart-dataset-retail/Walmart_Store_sales.csv")
store_data.head()
# Fetching information about the features in the dataset in the next code snippet.
store_data.info()
# ## Dataset Summary
# > The sales data are available for 45 stores for a duration of 2010-02-05 to 2012-11-01
# > The features are
# 1. Store - The store number
# 2. Date - The week of sales
# 3. Weekly_sales - Weekly sales for a store
# 4. Holiday_flag - 1 for week being holiday and 0 being non-holiday week
# 5. Temperature - Temperature on sale day.
# 6. Fuel price - fuel price in the region.
# 7. CPI - consumer price index
# 8. Unemployment - prevailing unemployment rate.
# converting date to date time format
store_data["Date"] = pd.to_datetime(store_data["Date"])
store_data.head()
store_data["Date"][0]
# checking out null entries in the data
store_data.isnull().sum()
# splitting date into Day month year
store_data["Day"] = pd.DatetimeIndex(store_data["Date"]).day
store_data["Month"] = pd.DatetimeIndex(store_data["Date"]).month
store_data["Year"] = pd.DatetimeIndex(store_data["Date"]).year
store_data.head()
# # Step 1
# > Find out which store having minimum and maximum sales
# The following code is for creating a bar chart of total sales for each store in the dataset.
# The first line of the code sets the size of the figure to be created.
# The next line of code calculates the total weekly sales for each store in the dataset by using the `groupby()` function to group the data by store and summing the `Weekly_Sales` column. The `sort_values()` function sorts the data in ascending order based on total sales.
# The `total_sales_store` variable stores the total sales for each store as a numpy array.
# The next line of code creates a list of colors for the bars in the chart. The bars for the stores with the highest and lowest sales are assigned the color red, while all other bars are assigned the color green.
# The `plot()` function creates the bar chart, with the `kind` parameter set to 'bar' to create a bar chart. The `color` parameter is set to the `colors` list created earlier to assign the appropriate color to each bar.
# The `ticklabel_format()` function is used to format the y-axis labels to be in plain style without an offset.
# Finally, the `title()`, `xlabel()`, and `ylabel()` functions are used to add a title and axis labels to the chart. The title of the chart is "Total sales for each store", the x-axis label is "Store", and the y-axis label is "Total Sales".
plt.figure(figsize=(30, 30))
# finding out total value of weekly sales per store and then sorting it
weekly_sales_per_store = store_data.groupby("Store")["Weekly_Sales"].sum().sort_values()
total_sales_store = np.array(weekly_sales_per_store)
# Assigning specific color for the stores that have min and max sales
colors = [
"green"
if ((x < max(total_sales_store)) and (x > min(total_sales_store)))
else "red"
for x in total_sales_store
]
axes = weekly_sales_per_store.plot(kind="bar", color=colors)
plt.ticklabel_format(useOffset=False, style="plain", axis="y")
plt.title("Total sales for each store")
plt.xlabel("Store")
plt.ylabel("Total Sales")
# Min - 33 Max - 20
# # Step 2
# > Finding out which store has maximum standard deviation i.e sales vary a lot. Also, find out coeff of mean to standard deviation.
# The following code is using the dataset to identify the store with the highest standard deviation of weekly sales.
# The first line of the code creates a new DataFrame called "data_std". This DataFrame is created by grouping the original "store_data" DataFrame by store and then calculating the standard deviation of the weekly sales for each store using the "std()" method. The resulting values are then sorted in descending order using the "sort_values()" method.
# The second line of the code prints out a message indicating which store has the highest standard deviation of weekly sales. It does this by selecting the first row of the "data_std" DataFrame (using the "head(1)" method), extracting the index of that row (using the ".index[0]" notation), and then formatting a string that includes the store number and the standard deviation value (using the ".format()" method).
#
data_std = pd.DataFrame(
store_data.groupby("Store")["Weekly_Sales"].std().sort_values(ascending=False)
)
print(
"The store has maximum standard deviation is "
+ str(data_std.head(1).index[0])
+ " with {0:.0f} $".format(data_std.head(1).Weekly_Sales[data_std.head(1).index[0]])
)
# The following code generates a histogram to display the sales distribution of a particular store that has the highest standard deviation of weekly sales in the dataset
#
# Distribution of store that has maximum standard deviation
plt.figure(figsize=(20, 20))
sns.distplot(
store_data[store_data["Store"] == data_std.head(1).index[0]]["Weekly_Sales"]
)
plt.title("The sales Distribution of Store " + str(data_std.head(1).index[0]))
# coefficient of mean to standard deviation
coeff_mean_std = pd.DataFrame(
store_data.groupby("Store")["Weekly_Sales"].std()
/ store_data.groupby("Store")["Weekly_Sales"].mean()
)
coeff_mean_std = coeff_mean_std.rename(
columns={"Weekly_Sales": "Coefficient of mean to standard deviation"}
)
coeff_mean_std
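# Sketch (added for illustration): the store whose weekly sales vary the most relative
# to their mean, i.e. the highest coefficient of variation from the table above.
print(
    "Store with the highest coefficient of variation:",
    coeff_mean_std["Coefficient of mean to standard deviation"].idxmax(),
)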
# # Step 3.
# > Finding out which store has good quarterly growth in Q3'2012
#
# Total weekly sales per store during Q3 2012 (July-September)
q3 = (
store_data[
(store_data["Date"] > "2012-07-01") & (store_data["Date"] < "2012-09-30")
]
.groupby("Store")["Weekly_Sales"]
.sum()
)
print("Max sale::" + str(q3.max()))
print("Store index::" + str(q3.idxmax()))
# # Step 4
# > Some holidays have a negative impact on sales. Finding out holidays which have higher sales than the mean sales in non-holiday season for all stores together
# >> Holiday Events:
# - Super Bowl: 12-Feb-10, 11-Feb-11, 10-Feb-12, 8-Feb-13
# - Labour Day: 10-Sep-10, 9-Sep-11, 7-Sep-12, 6-Sep-13
# - Thanksgiving: 26-Nov-10, 25-Nov-11, 23-Nov-12, 29-Nov-13
# - Christmas: 31-Dec-10, 30-Dec-11, 28-Dec-12, 27-Dec-13
# The following code is using the dataset to plot the total weekly sales over time, with vertical lines to indicate holidays.
# First, the code defines a function `plot_line` that takes in a dataframe `df`, a list of holiday dates `holiday_dates`, and a holiday label `holiday_label`. The function creates a line plot of weekly sales over time, with the holiday label as the plot title. Then, for each date in `holiday_dates`, the function adds a vertical line at that date with a red dashed line style. Finally, the function formats the x-axis tick labels to show dates in a human-readable format.
# Next, the code groups the `store_data` dataframe by date and calculates the sum of weekly sales for each date, storing the result in the `total_sales` dataframe.
# Finally, the code calls `plot_line` four times, once for each holiday: Super Bowl, Labor Day, Thanksgiving, and Christmas. Each call to `plot_line` passes in the `total_sales` dataframe as `df`, the corresponding list of holiday dates, and the holiday label. This results in four line plots, each showing the total weekly sales over time with vertical lines at the corresponding holiday dates.
# finding out visually by plotting sales plot
def plot_line(df, holiday_dates, holiday_label):
fig, ax = plt.subplots(figsize=(15, 5))
ax.plot(df["Date"], df["Weekly_Sales"], label=holiday_label)
for day in holiday_dates:
day = datetime.strptime(day, "%d-%m-%Y")
plt.axvline(x=day, linestyle="--", c="r")
plt.title(holiday_label)
x_dates = df["Date"].dt.strftime("%Y-%m-%d").sort_values().unique()
plt.gcf().autofmt_xdate(rotation=90)
plt.show()
total_sales = store_data.groupby("Date")["Weekly_Sales"].sum().reset_index()
Super_Bowl = ["12-2-2010", "11-2-2011", "10-2-2012"]
Labour_Day = ["10-9-2010", "9-9-2011", "7-9-2012"]
Thanksgiving = ["26-11-2010", "25-11-2011", "23-11-2012"]
Christmas = ["31-12-2010", "30-12-2011", "28-12-2012"]
plot_line(total_sales, Super_Bowl, "Super Bowl")
plot_line(total_sales, Labour_Day, "Labour Day")
plot_line(total_sales, Thanksgiving, "Thanksgiving")
plot_line(total_sales, Christmas, "Christmas")
# Sales increased during the Thanksgiving holiday weeks, while they decreased during the Christmas holiday weeks
#
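# A hedged numeric check of the Step-4 question (an addition; it assumes Holiday_Flag == 1
# marks the holiday weeks): compare each holiday week's mean sales with the non-holiday mean.
non_holiday_mean = store_data[store_data["Holiday_Flag"] == 0]["Weekly_Sales"].mean()
holiday_means = (
    store_data[store_data["Holiday_Flag"] == 1].groupby("Date")["Weekly_Sales"].mean()
)
print("Mean weekly sales in non-holiday weeks: {:.0f}".format(non_holiday_mean))
print("Holiday weeks with sales above the non-holiday mean:")
print(holiday_means[holiday_means > non_holiday_mean])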
store_data.columns
# # Step 5.
# > Provide a monthly and semester view of sales in units and give insights
# The following adds a "Month_Year" column to "store_data": the "Date" column is converted to pandas datetime with `pd.to_datetime()`, and `dt.to_period('M')` extracts the year-month period for each row.
#
store_data["Month_Year"] = pd.to_datetime(store_data["Date"]).dt.to_period("M")
# The code groups the retail sales data by month-year and calculates the total weekly sales for each group.
monthly_sales = store_data.groupby("Month_Year")["Weekly_Sales"].sum().reset_index()
# Similarly, this aggregates sales by semester (first vs. second half of each year).
store_data["Semester"] = np.where(
store_data["Month"] <= 6, "1st Semester", "2nd Semester"
)
semester_sales = (
store_data.groupby(["Year", "Semester"])["Weekly_Sales"].sum().reset_index()
)
print(monthly_sales)
print(semester_sales)
# The following code blocks plot the semester and monthly sales.
# plotting semester sales (label bars by year + semester so the two years don't overlap at the same x position)
plt.figure(figsize=(8, 6))
plt.bar(
    semester_sales["Year"].astype(str) + " " + semester_sales["Semester"],
    semester_sales["Weekly_Sales"],
)
plt.title("Semester Sales")
plt.xlabel("Semester")
plt.ylabel("Total Sales in Units")
plt.show()
# plotting monthly sales
plt.figure(figsize=(12, 6))
plt.plot(monthly_sales["Month_Year"].astype(str), monthly_sales["Weekly_Sales"])
plt.title("Monthly Sales")
plt.xlabel("Month")
plt.ylabel("Total Sales in Units")
plt.xticks(rotation=90)
plt.show()
store_data.columns
# # Step 6.
# > Prediction model is built to forecast the demand.
# The dataset is first split into training and test sets (80/20 split).
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
store_data.drop("Weekly_Sales", axis=1),
store_data["Weekly_Sales"],
test_size=0.2,
random_state=42,
)
# Imports required for the preprocessing pipeline built in the next steps.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
# The following code block defines two transformers, `numeric_transformer` and `categorical_transformer`, which will be used to preprocess the numerical and categorical columns of the dataset, respectively.
# The `numeric_transformer` pipeline consists of two steps:
# 1. `SimpleImputer` - This step replaces any missing values in the numerical columns of the dataset with the median value of the column. Missing values are a common issue in real-world datasets, and imputation is a common way to handle them. The median is a robust measure of central tendency that is not affected by outliers and is therefore a good choice to replace missing values.
# 2. `StandardScaler` - This step standardizes the numerical columns by subtracting the mean and dividing by the standard deviation. Standardization is important when working with numerical columns that have different units or scales. It ensures that each feature has the same importance when training a machine learning model.
# The `categorical_transformer` pipeline also consists of two steps:
# 1. `SimpleImputer` - This step replaces any missing values in the categorical columns of the dataset with the most frequent value of the column. This is a common strategy for imputing missing values in categorical data.
# 2. `OneHotEncoder` - This step converts the categorical columns into numerical features using a one-hot encoding strategy. This means that each possible value of the categorical feature is converted into a binary feature, where 1 indicates the presence of the value and 0 indicates the absence. This is a common way to represent categorical data in machine learning models, as it ensures that each category is treated equally. The `handle_unknown='ignore'` argument specifies that any unseen categories should be ignored during encoding, which is a common strategy to avoid overfitting.
# Preprocess the data
numeric_transformer = Pipeline(
steps=[("imputer", SimpleImputer(strategy="median")), ("scaler", StandardScaler())]
)
categorical_transformer = Pipeline(
steps=[
("imputer", SimpleImputer(strategy="most_frequent")),
("onehot", OneHotEncoder(handle_unknown="ignore")),
]
)
# Next, the numeric and categorical columns are listed separately.
numeric_cols = ["Temperature", "Fuel_Price", "CPI", "Unemployment"]
categorical_cols = [
"Store",
"Holiday_Flag",
"Day",
"Month",
"Year",
"Month_Year",
"Semester",
]
# In the next code snippet, the transformers defined above are applied to their respective column types via a ColumnTransformer.
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, numeric_cols),
("cat", categorical_transformer, categorical_cols),
]
)
# The preprocessor is fitted on (and applied to) the training set, and the fitted preprocessor then transforms the test set.
X_train = preprocessor.fit_transform(X_train)
X_test = preprocessor.transform(X_test)
# A random forest regressor is trained on the preprocessed features, used to predict on the test split, and evaluated with the RMSE score.
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
# Making predictions
y_pred = rf.predict(X_test)
from sklearn.metrics import mean_squared_error
# Evaluating the model
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print("RMSE:", rmse)
<jupyter_start><jupyter_text>Food-11 image dataset
### Content
The original dataset can be found [here](https://mmspg.epfl.ch/downloads/food-image-datasets/). The main difference between the original and this dataset is that I placed each category of food in a separate folder to make the model training process more convenient.
This dataset contains 16643 food images grouped in 11 major food categories.
There are 3 splits in this dataset:
- evaluation
- training
- validation
Each split contains 11 categories of food:
- Bread
- Dairy product
- Dessert
- Egg
- Fried food
- Meat
- Noodles-Pasta
- Rice
- Seafood
- Soup
- Vegetable-Fruit
Kaggle dataset identifier: food11-image-dataset
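A quick way to verify the layout described above (an illustrative sketch; it assumes the standard Kaggle mount point /kaggle/input/food11-image-dataset, the same path used in the script below):
import os
root = "/kaggle/input/food11-image-dataset"
for split in ("training", "validation", "evaluation"):
    classes = sorted(os.listdir(os.path.join(root, split)))
    n_images = sum(len(os.listdir(os.path.join(root, split, c))) for c in classes)
    print(split, "-", len(classes), "classes,", n_images, "images")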
<jupyter_script>import numpy as np
import torch
import os
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision.datasets import DatasetFolder
# This is for the progress bar.
from tqdm.auto import tqdm
import torch.nn as nn
class StudentNet(nn.Module):
def __init__(self, base=16, width_mult=1):
super(StudentNet, self).__init__()
multiplier = [1, 2, 4, 8, 16, 16, 16, 16]
        bandwidth = [base * m for m in multiplier]  # number of output channels of each layer
        for i in range(3, 7):  # only layers 3/4/5/6 get width-scaled (pruned)
            bandwidth[i] = int(bandwidth[i] * width_mult)
        self.cnn = nn.Sequential(
            # The first convolution is usually not decomposed (kept as a standard convolution)
nn.Sequential(
nn.Conv2d(3, bandwidth[0], 3, 1, 1),
nn.BatchNorm2d(bandwidth[0]),
nn.ReLU6(),
nn.MaxPool2d(2, 2, 0),
),
            # The following Sequential blocks all share the same structure, so only the first one is annotated in detail
nn.Sequential(
# Depthwise Convolution
nn.Conv2d(bandwidth[0], bandwidth[0], 3, 1, 1, groups=bandwidth[0]),
# Batch Normalization
nn.BatchNorm2d(bandwidth[0]),
                # ReLU6 clips activations to the range [0, 6];
                # the MobileNet family uses ReLU6 to make later quantization easier.
                nn.ReLU6(),
                # Pointwise convolution; no ReLU afterwards - empirically, pointwise + ReLU changes (usually hurts) the results.
                nn.Conv2d(bandwidth[0], bandwidth[1], 1),
                # Downsample after each block
nn.MaxPool2d(2, 2, 0),
),
nn.Sequential(
nn.Conv2d(bandwidth[1], bandwidth[1], 3, 1, 1, groups=bandwidth[1]),
nn.BatchNorm2d(bandwidth[1]),
nn.ReLU6(),
nn.Conv2d(bandwidth[1], bandwidth[2], 1),
nn.MaxPool2d(2, 2, 0),
),
nn.Sequential(
nn.Conv2d(bandwidth[2], bandwidth[2], 3, 1, 1, groups=bandwidth[2]),
nn.BatchNorm2d(bandwidth[2]),
nn.ReLU6(),
nn.Conv2d(bandwidth[2], bandwidth[3], 1),
nn.MaxPool2d(2, 2, 0),
),
            # The feature map has already been downsampled several times, so no more MaxPool from here on
nn.Sequential(
nn.Conv2d(bandwidth[3], bandwidth[3], 3, 1, 1, groups=bandwidth[3]),
nn.BatchNorm2d(bandwidth[3]),
nn.ReLU6(),
nn.Conv2d(bandwidth[3], bandwidth[4], 1),
),
nn.Sequential(
nn.Conv2d(bandwidth[4], bandwidth[4], 3, 1, 1, groups=bandwidth[4]),
nn.BatchNorm2d(bandwidth[4]),
nn.ReLU6(),
nn.Conv2d(bandwidth[4], bandwidth[5], 1),
),
nn.Sequential(
nn.Conv2d(bandwidth[5], bandwidth[5], 3, 1, 1, groups=bandwidth[5]),
nn.BatchNorm2d(bandwidth[5]),
nn.ReLU6(),
nn.Conv2d(bandwidth[5], bandwidth[6], 1),
),
nn.Sequential(
nn.Conv2d(bandwidth[6], bandwidth[6], 3, 1, 1, groups=bandwidth[6]),
nn.BatchNorm2d(bandwidth[6]),
nn.ReLU6(),
nn.Conv2d(bandwidth[6], bandwidth[7], 1),
),
            # Global average pooling squeezes inputs of different sizes into the same shape, so the fully connected layer below never sees a size mismatch
nn.AdaptiveAvgPool2d((1, 1)),
)
        # Map the CNN output directly to the 11 classes
self.fc = nn.Sequential(nn.Linear(bandwidth[7], 11))
def forward(self, x):
x = self.cnn(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
def loss_fn_kd(outputs, labels, teacher_outputs, T=20, alpha=0.5):
    # Ordinary cross entropy on the hard labels
hard_loss = F.cross_entropy(outputs, labels) * (1.0 - alpha)
soft_loss = nn.KLDivLoss(reduction="batchmean")(
F.log_softmax(outputs / T, dim=1), F.softmax(teacher_outputs / T, dim=1)
) * (alpha * T * T)
return hard_loss + soft_loss
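# A tiny usage check of loss_fn_kd on random tensors (illustration only, not part of the
# original training code). With alpha=0.5 the hard cross-entropy term and the temperature-
# softened KL term contribute equally; the T*T factor keeps the soft-target gradients at a
# comparable scale.
_dummy_student = torch.randn(4, 11)
_dummy_teacher = torch.randn(4, 11)
_dummy_labels = torch.randint(0, 11, (4,))
print("example KD loss:", loss_fn_kd(_dummy_student, _dummy_labels, _dummy_teacher, T=20, alpha=0.5))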
def run_epoch(data_loader, update=True, alpha=0.5):
total_num, total_hit, total_loss = 0, 0, 0
for batch_data in tqdm(data_loader):
        # Zero the gradients
        optimizer.zero_grad()
        # Unpack the batch
        inputs, hard_labels = batch_data
        # The teacher is not backpropagated through, so torch.no_grad() is used
with torch.no_grad():
soft_labels = teacher_net(inputs.to(device))
if update:
logits = student_net(inputs.to(device))
            # Use the combined soft-label & hard-label loss defined above (loss_fn_kd); T=20 is the value used in the original paper
loss = loss_fn_kd(logits, hard_labels.to(device), soft_labels, 20, alpha)
loss.backward()
optimizer.step()
else:
            # Validation only, so no gradients are needed
with torch.no_grad():
logits = student_net(inputs.to(device))
loss = loss_fn_kd(
logits, hard_labels.to(device), soft_labels, 20, alpha
)
total_hit += torch.sum(
torch.argmax(logits, dim=1) == hard_labels.to(device)
).item()
total_num += len(inputs)
total_loss += loss.item() * len(inputs)
return total_loss / total_num, total_hit / total_num
if __name__ == "__main__":
# config
batch_size = 32
cuda = True
epochs = 10
# "cuda" only when GPUs are available.
device = "cuda:1" if torch.cuda.is_available() else "cpu"
    # Load the data
trainTransform = transforms.Compose(
[
transforms.RandomCrop(256, pad_if_needed=True, padding_mode="symmetric"),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(15),
transforms.ToTensor(),
]
)
testTransform = transforms.Compose(
[
transforms.CenterCrop(256),
transforms.ToTensor(),
]
)
train_set = DatasetFolder(
"/kaggle/input/food11-image-dataset/training",
loader=lambda x: Image.open(x),
extensions="jpg",
transform=trainTransform,
)
valid_set = DatasetFolder(
"/kaggle/input/food11-image-dataset/validation",
loader=lambda x: Image.open(x),
extensions="jpg",
transform=testTransform,
)
train_dataloader = DataLoader(
train_set, batch_size=batch_size, shuffle=True, num_workers=0, pin_memory=True
)
valid_dataloader = DataLoader(
valid_set, batch_size=batch_size, shuffle=True, num_workers=0, pin_memory=True
)
print("Data Loaded")
    # Load the networks
teacher_net = models.resnet18(pretrained=False, num_classes=11)
teacher_net.load_state_dict(
torch.load(f"/kaggle/input/teacher-resnet18/teacher_resnet18.bin")
)
student_net = StudentNet(base=16)
if cuda:
teacher_net = teacher_net.cuda(device)
student_net = student_net.cuda(device)
print("Model Loaded")
    # Start training (knowledge distillation)
print("Training Started")
optimizer = optim.AdamW(student_net.parameters(), lr=1e-3)
teacher_net.eval()
now_best_acc = 0
for epoch in range(epochs):
student_net.train()
train_loss, train_acc = run_epoch(train_dataloader, update=True)
student_net.eval()
valid_loss, valid_acc = run_epoch(valid_dataloader, update=False)
        # Keep the best model
if valid_acc > now_best_acc:
now_best_acc = valid_acc
torch.save(student_net.state_dict(), "./student_model.bin")
print(
"epoch {:>3d}: train loss: {:6.4f}, acc {:6.4f} valid loss: {:6.4f}, acc {:6.4f}".format(
epoch, train_loss, train_acc, valid_loss, valid_acc
)
)
# Run one epoch; this variant also tracks the teacher's accuracy
def run_teacher_epoch(data_loader, update=True, alpha=0.5):
total_num, total_hit, total_loss, teacher_total_hit = 0, 0, 0, 0
for batch_data in tqdm(data_loader):
        # Zero the gradients
        optimizer.zero_grad()
        # Unpack the batch
        inputs, hard_labels = batch_data
        # The teacher is not backpropagated through, so torch.no_grad() is used
with torch.no_grad():
soft_labels = teacher_net(inputs.to(device))
if update:
logits = student_net(inputs.to(device))
            # Use the combined soft-label & hard-label loss defined above (loss_fn_kd); T=20 is the value used in the original paper
            loss = loss_fn_kd(logits, hard_labels.to(device), soft_labels, 20, alpha)
loss.backward()
optimizer.step()
else:
            # Validation only, so no gradients are needed
with torch.no_grad():
logits = student_net(inputs.to(device))
loss = loss_fn_kd(
logits, hard_labels.to(device), soft_labels, 20, alpha
)
teacher_total_hit += torch.sum(
torch.argmax(soft_labels, dim=1) == hard_labels.to(device)
).item()
total_hit += torch.sum(
torch.argmax(logits, dim=1) == hard_labels.to(device)
).item()
total_num += len(inputs)
total_loss += loss.item() * len(inputs)
return total_loss / total_num, total_hit / total_num, teacher_total_hit / total_num
student_net.eval()
valid_loss, valid_acc, teacher_acc = run_teacher_epoch(valid_dataloader, update=False)
print(valid_acc, teacher_acc)
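# A rough size comparison between teacher and student (an addition): shrinking the parameter
# count is the point of distilling ResNet-18 into the depthwise-separable StudentNet.
def count_params(model):
    return sum(p.numel() for p in model.parameters())
print("teacher parameters:", count_params(teacher_net))
print("student parameters:", count_params(student_net))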
import numpy as np
import torch
import os
import pickle
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision.datasets import DatasetFolder
import torch.nn.utils.prune as prune
from tqdm.auto import tqdm
def network_slimming(old_model, new_model):
old_params = old_model.state_dict()
new_params = new_model.state_dict()
    # Keep only a subset of the channels in each layer
    selected_idx = []
    for i in range(8):  # only the CNN part of the model (8 Sequential blocks) is pruned
gamma = old_params[f"cnn.{i}.1.weight"]
new_dim = len(new_params[f"cnn.{i}.1.weight"])
ranking = torch.argsort(gamma, descending=True)
selected_idx.append(ranking[:new_dim])
    now_processing = 1  # which Sequential block is currently being processed; block 0 needs no special handling
    for param_name, weights in old_params.items():
        # For CNN layers, copy only the channels selected via gamma; for the FC layer or scalar parameters (e.g. batchnorm's num_batches_tracked) copy everything as-is
if (
param_name.startswith("cnn")
and weights.size() != torch.Size([])
and now_processing != len(selected_idx)
):
            # Reaching the pointwise convolution means the current Sequential block has been fully processed
            if param_name.startswith(f"cnn.{now_processing}.3"):
                now_processing += 1
            # Pointwise convolution weights depend on the pruning of both the previous and the next Sequential block, so they need special handling
            if param_name.endswith("3.weight"):
                # The pointwise kernels of the last Sequential block are not removed
if len(selected_idx) == now_processing:
                    # selected_idx[now_processing-1] holds the indices of the channels kept in the current block
new_params[param_name] = weights[
:, selected_idx[now_processing - 1]
]
                # Except for the last block, the number of kernels (output channels) of each block has to match the next block.
                else:
                    # For a pointwise Conv2d(x, y, 1) the weight has shape (y, x, 1, 1)
                    # selected_idx[now_processing] holds the channel indices kept in the next block
                    # selected_idx[now_processing-1] holds the channel indices kept in the current block
new_params[param_name] = weights[selected_idx[now_processing]][
:, selected_idx[now_processing - 1]
]
else:
new_params[param_name] = weights[selected_idx[now_processing]]
else:
new_params[param_name] = weights
    # Load the copied parameters into the new model and return it
new_model.load_state_dict(new_params)
return new_model
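# Toy illustration of the selection rule above (an addition): "cnn.{i}.1.weight" is the
# BatchNorm gamma vector of block i, and the channels with the largest gamma survive the
# pruning -- the core idea of network slimming.
_bn = nn.BatchNorm2d(8)
_bn.weight.data = torch.tensor([0.9, 0.1, 0.5, 0.8, 0.05, 0.7, 0.3, 0.6])
print("channels kept when slimming 8 -> 4:", torch.argsort(_bn.weight.data, descending=True)[:4])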
def run_epoch(data_loader):
total_num, total_hit, total_loss = 0, 0, 0
for batch_data in tqdm(data_loader):
# for now_step, batch_data in enumerate(dataloader):
        # Zero the optimizer's gradients
        optimizer.zero_grad()
        # Unpack the batch
inputs, labels = batch_data
logits = new_net(inputs.to(device))
loss = criterion(logits, labels.to(device))
loss.backward()
optimizer.step()
total_hit += torch.sum(torch.argmax(logits, dim=1) == labels.to(device)).item()
total_num += len(inputs)
total_loss += loss.item() * len(inputs)
return total_loss / total_num, total_hit / total_num
batch_size = 64
cuda = True
prune_count = 3
prune_rate = 0.95
finetune_epochs = 5
# "cuda" only when GPUs are available.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
# Load the architecture of the model to be pruned.
# The original `from student_model import StudentNet` would fail here because no
# student_model module is available; the class is redefined below instead.
# from student_model import StudentNet
class StudentNet(nn.Module):
def __init__(self, base=16, width_mult=1):
super(StudentNet, self).__init__()
multiplier = [1, 2, 4, 8, 16, 16, 16, 16]
        bandwidth = [base * m for m in multiplier]  # number of output channels of each layer
        for i in range(3, 7):  # only layers 3/4/5/6 get width-scaled (pruned)
            bandwidth[i] = int(bandwidth[i] * width_mult)
        self.cnn = nn.Sequential(
            # The first convolution is usually not decomposed (kept as a standard convolution)
nn.Sequential(
nn.Conv2d(3, bandwidth[0], 3, 1, 1),
nn.BatchNorm2d(bandwidth[0]),
nn.ReLU6(),
nn.MaxPool2d(2, 2, 0),
),
            # The following Sequential blocks all share the same structure, so only the first one is annotated in detail
nn.Sequential(
# Depthwise Convolution
nn.Conv2d(bandwidth[0], bandwidth[0], 3, 1, 1, groups=bandwidth[0]),
# Batch Normalization
nn.BatchNorm2d(bandwidth[0]),
                # ReLU6 clips activations to the range [0, 6];
                # the MobileNet family uses ReLU6 to make later quantization easier.
                nn.ReLU6(),
                # Pointwise convolution; no ReLU afterwards - empirically, pointwise + ReLU changes (usually hurts) the results.
                nn.Conv2d(bandwidth[0], bandwidth[1], 1),
                # Downsample after each block
nn.MaxPool2d(2, 2, 0),
),
nn.Sequential(
nn.Conv2d(bandwidth[1], bandwidth[1], 3, 1, 1, groups=bandwidth[1]),
nn.BatchNorm2d(bandwidth[1]),
nn.ReLU6(),
nn.Conv2d(bandwidth[1], bandwidth[2], 1),
nn.MaxPool2d(2, 2, 0),
),
nn.Sequential(
nn.Conv2d(bandwidth[2], bandwidth[2], 3, 1, 1, groups=bandwidth[2]),
nn.BatchNorm2d(bandwidth[2]),
nn.ReLU6(),
nn.Conv2d(bandwidth[2], bandwidth[3], 1),
nn.MaxPool2d(2, 2, 0),
),
            # The feature map has already been downsampled several times, so no more MaxPool from here on
nn.Sequential(
nn.Conv2d(bandwidth[3], bandwidth[3], 3, 1, 1, groups=bandwidth[3]),
nn.BatchNorm2d(bandwidth[3]),
nn.ReLU6(),
nn.Conv2d(bandwidth[3], bandwidth[4], 1),
),
nn.Sequential(
nn.Conv2d(bandwidth[4], bandwidth[4], 3, 1, 1, groups=bandwidth[4]),
nn.BatchNorm2d(bandwidth[4]),
nn.ReLU6(),
nn.Conv2d(bandwidth[4], bandwidth[5], 1),
),
nn.Sequential(
nn.Conv2d(bandwidth[5], bandwidth[5], 3, 1, 1, groups=bandwidth[5]),
nn.BatchNorm2d(bandwidth[5]),
nn.ReLU6(),
nn.Conv2d(bandwidth[5], bandwidth[6], 1),
),
nn.Sequential(
nn.Conv2d(bandwidth[6], bandwidth[6], 3, 1, 1, groups=bandwidth[6]),
nn.BatchNorm2d(bandwidth[6]),
nn.ReLU6(),
nn.Conv2d(bandwidth[6], bandwidth[7], 1),
),
            # Global average pooling squeezes inputs of different sizes into the same shape, so the fully connected layer below never sees a size mismatch
nn.AdaptiveAvgPool2d((1, 1)),
)
        # Map the CNN output directly to the 11 classes
self.fc = nn.Sequential(nn.Linear(bandwidth[7], 11))
def forward(self, x):
x = self.cnn(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
if __name__ == "__main__":
    # Load the data
trainTransform = transforms.Compose(
[
transforms.RandomCrop(256, pad_if_needed=True, padding_mode="symmetric"),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(15),
transforms.ToTensor(),
]
)
testTransform = transforms.Compose(
[
transforms.CenterCrop(256),
transforms.ToTensor(),
]
)
    # Note: adjust the dataset paths below to your own environment
train_set = DatasetFolder(
"dataset/food-11/food-11/training/labeled",
loader=lambda x: Image.open(x),
extensions="jpg",
transform=trainTransform,
)
valid_set = DatasetFolder(
"dataset/food-11/food-11/validation",
loader=lambda x: Image.open(x),
extensions="jpg",
transform=testTransform,
)
train_dataloader = DataLoader(
train_set, batch_size=batch_size, shuffle=True, num_workers=0, pin_memory=True
)
valid_dataloader = DataLoader(
valid_set, batch_size=batch_size, shuffle=True, num_workers=0, pin_memory=True
)
print("Data Loaded")
    # Load the network to be pruned
old_net = StudentNet(base=16)
old_net.load_state_dict(torch.load(f"./student_custom_small.bin"))
if cuda:
old_net = old_net.cuda(device)
    # Prune and finetune: prune prune_count times; the width multiplier shrinks by prune_rate each round (so the overall pruning ratio grows), and after every round the model is finetuned for finetune_epochs epochs
criterion = nn.CrossEntropyLoss()
optimizer = optim.AdamW(old_net.parameters(), lr=1e-3)
now_width_mult = 1
for i in range(prune_count):
        now_width_mult *= prune_rate  # shrink the kept width (i.e. increase the overall pruning ratio)
new_net = StudentNet(width_mult=now_width_mult)
if cuda:
new_net = new_net.cuda(device)
new_net = network_slimming(old_net, new_net)
now_best_acc = 0
for epoch in range(finetune_epochs):
new_net.train()
train_loss, train_acc = run_epoch(train_dataloader)
new_net.eval()
valid_loss, valid_acc = run_epoch(valid_dataloader)
            # Keep the best model of each pruning round
if valid_acc > now_best_acc:
now_best_acc = valid_acc
torch.save(
new_net.state_dict(), f"./pruned_{now_width_mult}_student_model.bin"
)
print(
"rate {:6.4f} epoch {:>3d}: train loss: {:6.4f}, acc {:6.4f} valid loss: {:6.4f}, acc {:6.4f}".format(
now_width_mult, epoch, train_loss, train_acc, valid_loss, valid_acc
)
)
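    # After the last pruning round (an addition, not in the original script): compare the
    # parameter counts of the unpruned student and the final pruned model.
    def count_parameters(m):
        return sum(p.numel() for p in m.parameters())
    print(
        "parameters before / after pruning:",
        count_parameters(old_net),
        "/",
        count_parameters(new_net),
    )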
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
df_train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
df_train
df_test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
df_test
df_train.drop("id", axis=1, inplace=True)
df_test.drop("id", axis=1, inplace=True)
df_train.info()
df_train.describe()
df_train.isnull().sum()
import sweetviz as sv
report = sv.analyze(df_train)
report.show_notebook()
plt.figure(figsize=(25, 30), facecolor="white")
pltno = 1
for column in df_train:
if pltno <= 18:
ax = plt.subplot(6, 3, pltno)
sns.histplot(x=column, data=df_train)
plt.xlabel(column)
plt.ylabel("Count")
pltno += 1
plt.tight_layout()
sns.pairplot(df_train)
fig, axes = plt.subplots(nrows=6, ncols=3, figsize=(15, 20))
axes = axes.flatten()
for i, columns in enumerate(df_train.columns):
sns.boxplot(x=df_train[columns], ax=axes[i])
for i in range(len(df_train.columns), len(axes)):
axes[i].axis("off")
plt.show()
corr = df_train.corr()
a = corr >= 0.9
sns.heatmap(corr[a], annot=True, cmap="Pastel1")
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/042/129042489.ipynb
| null | null |
[{"Id": 129042489, "ScriptId": 38357032, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8217877, "CreationDate": "05/10/2023 14:15:05", "VersionNumber": 1.0, "Title": "notebookcfda4c9f15", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 76.0, "LinesInsertedFromPrevious": 76.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
df_train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
df_train
df_test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
df_test
df_train.drop("id", axis=1, inplace=True)
df_test.drop("id", axis=1, inplace=True)
df_train.info()
df_train.describe()
df_train.isnull().sum()
import sweetviz as sv
report = sv.analyze(df_train)
report.show_notebook()
plt.figure(figsize=(25, 30), facecolor="white")
pltno = 1
for column in df_train:
if pltno <= 18:
ax = plt.subplot(6, 3, pltno)
sns.histplot(x=column, data=df_train)
plt.xlabel(column)
plt.ylabel("Count")
pltno += 1
plt.tight_layout()
sns.pairplot(df_train)
fig, axes = plt.subplots(nrows=6, ncols=3, figsize=(15, 20))
axes = axes.flatten()
for i, columns in enumerate(df_train.columns):
sns.boxplot(x=df_train[columns], ax=axes[i])
for i in range(len(df_train.columns), len(axes)):
axes[i].axis("off")
plt.show()
corr = df_train.corr()
a = corr >= 0.9
sns.heatmap(corr[a], annot=True, cmap="Pastel1")
plt.show()
| false | 0 | 606 | 0 | 606 | 606 |
||
129042812
|
#!pip install opencv-python
import cv2
import warnings
import matplotlib.pyplot as plt
import numpy as np
warnings.filterwarnings("ignore")
cv2.__version__
plt.figure(figsize=(4, 2))
imagedata = plt.imread("../input/catimagedataset/cat.1.jpg")
plt.imshow(imagedata)
plt.grid(False)
plt.show()
print("Yes")
print("Image Shape:{}".format(imagedata.shape))
print(
"Image Size is : Image Height: {}, Image Width: {} and Image Channle: {} = {}".format(
imagedata.shape[0], imagedata.shape[1], imagedata.shape[2], imagedata.size
)
)
imagedata.shape[0]
def catimageShow(imageTitle, image):
    # swap BGR -> RGB for display; single-channel masks are shown as-is, since
    # cv2.cvtColor with COLOR_BGR2RGB raises an error on 2-D arrays
    imageVariable = (
        cv2.cvtColor(image, cv2.COLOR_BGR2RGB) if image.ndim == 3 else image
    )
    plt.figure(figsize=(4, 2))
    plt.imshow(imageVariable)
    plt.title(imageTitle)
    plt.show()
catimageShow("This is a Orginal Cat Image", imagedata)
imagedata.shape[:2]
# mask, Lidar, Data Fusion
Image_mask = np.zeros(imagedata.shape[:2], dtype="uint8")
Image_mask
cv2.rectangle(Image_mask, (0, 450), (50, 200), 255)
catimageShow("Changed Image", Image_mask)
argmumentImage = {"Image": "../input/catimagedataset/cat.1.jpg"}
imagedata = plt.imread(argmumentImage["Image"])
catimageShow("orginal Image", imagedata)
bit_mask = cv2.bitwise_and(imagedata, imagedata, mask=Image_mask)
catimageShow("Bit masked Image", Image_mask)
cv2.circle(Image_mask, (145, 150), 120, 250, -1)
bit_mask = cv2.bitwise_and(imagedata, imagedata, mask=Image_mask)
catimageShow("Bit masked Image", Image_mask)
min(imagedata[0][0])
# Image Scaling
# Normalization
# Standardization
imagedata / 255
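# A minimal, hedged sketch (not in the original notebook) contrasting the two scalings
# named above: min-max normalization maps the pixel values into [0, 1], while
# standardization rescales them to zero mean and unit variance. It reuses the
# `imagedata` array loaded earlier; everything else here is illustrative.
pixels = imagedata.astype(np.float32)
normalized = (pixels - pixels.min()) / (pixels.max() - pixels.min())  # values in [0, 1]
standardized = (pixels - pixels.mean()) / pixels.std()  # mean ~0, std ~1
print(normalized.min(), normalized.max(), standardized.mean(), standardized.std())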
customValueW = 120.0 / imagedata.shape[1]
customValueH = 120.0 / imagedata.shape[0]
120 * 120
imageDimension = (120, int(imagedata.shape[0] * customValueW))
imageDimension
newImage = cv2.resize(imagedata, imageDimension, interpolation=cv2.INTER_AREA)
catimageShow("Resized Image", newImage)
newImage.shape
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/042/129042812.ipynb
| null | null |
[{"Id": 129042812, "ScriptId": 38347525, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13930373, "CreationDate": "05/10/2023 14:17:20", "VersionNumber": 2.0, "Title": "Python OpenCV", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 79.0, "LinesInsertedFromPrevious": 61.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 18.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
#!pip install opencv-python
import cv2
import warnings
import matplotlib.pyplot as plt
import numpy as np
warnings.filterwarnings("ignore")
cv2.__version__
plt.figure(figsize=(4, 2))
imagedata = plt.imread("../input/catimagedataset/cat.1.jpg")
plt.imshow(imagedata)
plt.grid(False)
plt.show()
print("Yes")
print("Image Shape:{}".format(imagedata.shape))
print(
"Image Size is : Image Height: {}, Image Width: {} and Image Channle: {} = {}".format(
imagedata.shape[0], imagedata.shape[1], imagedata.shape[2], imagedata.size
)
)
imagedata.shape[0]
def catimageShow(imageTitle, image):
    # swap BGR -> RGB for display; single-channel masks are shown as-is, since
    # cv2.cvtColor with COLOR_BGR2RGB raises an error on 2-D arrays
    imageVariable = (
        cv2.cvtColor(image, cv2.COLOR_BGR2RGB) if image.ndim == 3 else image
    )
    plt.figure(figsize=(4, 2))
    plt.imshow(imageVariable)
    plt.title(imageTitle)
    plt.show()
catimageShow("This is a Orginal Cat Image", imagedata)
imagedata.shape[:2]
# mask, Lidar, Data Fusion
Image_mask = np.zeros(imagedata.shape[:2], dtype="uint8")
Image_mask
cv2.rectangle(Image_mask, (0, 450), (50, 200), 255)
catimageShow("Changed Image", Image_mask)
argmumentImage = {"Image": "../input/catimagedataset/cat.1.jpg"}
imagedata = plt.imread(argmumentImage["Image"])
catimageShow("orginal Image", imagedata)
bit_mask = cv2.bitwise_and(imagedata, imagedata, mask=Image_mask)
catimageShow("Bit masked Image", Image_mask)
cv2.circle(Image_mask, (145, 150), 120, 250, -1)
bit_mask = cv2.bitwise_and(imagedata, imagedata, mask=Image_mask)
catimageShow("Bit masked Image", Image_mask)
min(imagedata[0][0])
# Image Scaling
# Normalization
# Standardization
imagedata / 255
customValueW = 120.0 / imagedata.shape[1]
customValueH = 120.0 / imagedata.shape[0]
120 * 120
imageDimension = (120, int(imagedata.shape[0] * customValueW))
imageDimension
newImage = cv2.resize(imagedata, imageDimension, interpolation=cv2.INTER_AREA)
catimageShow("Resized Image", newImage)
newImage.shape
| false | 0 | 662 | 0 | 662 | 662 |
||
129035281
|
<jupyter_start><jupyter_text>Performance Data on Football teams 09 to 22
The dataset was collected from football data websites for the seasons between 09/10 to 21/22. The data contains season season stats and performance data, such as attributes related to passing or shooting. Most of the data relates to Europe's top 5 Leagues as well as the champions league. Any missing columns for certain attributes for some older seasons is due the data not being collected or available until later seasons. The attributes are self explanatory and the "key" was just used as a unique key to combine different files.
Kaggle dataset identifier: performance-data-on-football-teams-09-to-22
<jupyter_script># # 🧮 **Is it statistically easier to win the Premier League vs La Liga?**
# ## 👹 **numbers can sometimes be deceiving**
# ---
# via GIPHY
# ## 🪔 **Please Upvote my kernel if you liked it** ⬆️
# # 📋 **Table of content**
# ---
# ## **1. [Introduction](#introduction)**
# ## **2. [Body](#body)**
# > **2.1. [Change the theme of the notebook](#cttotn)**
# > **2.2. [Install and Import important libraries, Read the data](#iaiil)**
# > **2.3. [Preprocessing](#preprocessing)**
# > **2.4. [Null Hypothesis](#nullhypothesis)**
# > **2.5. [Check Normality](#checknormality)**
# > **2.6. [T-Test](#ttest)**
# > **2.7. [Result](#result)**
# ## **3. [Conclusion](#conclusion)**
# # 📖 **INTRODUCTION**
# ---
# ## About Dataset
# The dataset used for this analysis is the "Performance Data on Football Teams 09 to 22." This dataset encompasses a comprehensive range of performance statistics for football teams participating in the Premier League and La Liga during the period from 2009 to 2022. It includes data on various aspects of team performance, such as points earned, goals scored, goal differentials, and other relevant metrics.
# By utilizing this dataset, we were able to focus specifically on the points earned by the winners of each league during the specified timeframe. Points serve as a fundamental measure of success in football leagues, as they directly determine a team's position in the standings. Analyzing the average points earned by the league winners allowed us to draw meaningful comparisons between the Premier League and La Liga in terms of the difficulty of winning.
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# # 🦴 **BODY**
# ---
# ## 🖌️ Change the theme of the notebook
# ---
#
from IPython.core.display import HTML
with open("./CSS.css", "r") as file:
custom_css = file.read()
HTML(custom_css)
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# ## 🛠️ Install and Import important libraries, Read the data
# ---
#
import pandas as pd
import pingouin as pg
df = pd.read_csv(
"/kaggle/input/performance-data-on-football-teams-09-to-22/Complete Dataset 2.csv"
)
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# ## ⚗️ Preprocessing
# ---
#
df.head()
df.tail()
df.describe()
df.dtypes
df.info()
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# Add start and end of the Season
df["Start Season"] = df["Season"].apply(lambda x: int(x[:4]))
df["End Season"] = df["Season"].apply(lambda x: (x[5:]))
# ## 🇭0 Null hypothesis: Winning the Premier League is more challenging than winning La Liga
# ---
# Select all Winners Premier League and La Liga from 2009-2022
pl = df[(df["League"] == "Premier League") & (df["Rank"] == 1)][
["League", "Team", "Start Season", "End Season", "Points", "Wins"]
].reset_index(drop=True)
ll = df[(df["League"] == "La Liga") & (df["Rank"] == 1)][
["League", "Team", "Start Season", "End Season", "Points", "Wins"]
].reset_index(drop=True)
# Average
print(
f"""La Liga Winners Average Points in 2009-2022 is -> {round(ll["Points"].mean(),2)}"""
)
print(
f"""Premier League Winners Average Points in 2009-2022 is -> {round(pl["Points"].mean(),2)}"""
)
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# # 🔔 Check normality
# ---
print(
f"""Premier League Winner Points in 2009-2022 are normal -> {pg.normality(pl.Points.values)["normal"].values[0]}"""
)
print(
f"""LaLiga Winner Points in 2009-2022 are normal -> {pg.normality(ll.Points.values)["normal"].values[0]}"""
)
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# # 🧪 T-Test
# ## 2009 - 2022
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# # ⚽ Result
#
pg.ttest(ll.Points.values, pl.Points.values, correction=False, alternative="greater")
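# A short, hedged follow-up (not in the original notebook): pingouin.ttest returns a
# one-row DataFrame, so the decision can be read from its "p-val" column; alpha = 0.05
# is an assumed significance level.
result = pg.ttest(ll.Points.values, pl.Points.values, correction=False, alternative="greater")
alpha = 0.05
p_value = result["p-val"].iloc[0]
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0 at the 5% level")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0 at the 5% level")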
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/035/129035281.ipynb
|
performance-data-on-football-teams-09-to-22
|
gurpreetsdeol
|
[{"Id": 129035281, "ScriptId": 38269250, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 7867890, "CreationDate": "05/10/2023 13:20:48", "VersionNumber": 4.0, "Title": "easier-to-win-the-PremierLeague-vs-LaLiga", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 143.0, "LinesInsertedFromPrevious": 75.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 68.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184736827, "KernelVersionId": 129035281, "SourceDatasetVersionId": 4444384}]
|
[{"Id": 4444384, "DatasetId": 2602687, "DatasourceVersionId": 4504086, "CreatorUserId": 12219559, "LicenseName": "Unknown", "CreationDate": "11/03/2022 17:36:26", "VersionNumber": 1.0, "Title": "Performance Data on Football teams 09 to 22", "Slug": "performance-data-on-football-teams-09-to-22", "Subtitle": "Dataset for teams in the top 5 leagues for seasons between 09/10 and 21/22", "Description": "The dataset was collected from football data websites for the seasons between 09/10 to 21/22. The data contains season season stats and performance data, such as attributes related to passing or shooting. Most of the data relates to Europe's top 5 Leagues as well as the champions league. Any missing columns for certain attributes for some older seasons is due the data not being collected or available until later seasons. The attributes are self explanatory and the \"key\" was just used as a unique key to combine different files.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2602687, "CreatorUserId": 12219559, "OwnerUserId": 12219559.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 4444384.0, "CurrentDatasourceVersionId": 4504086.0, "ForumId": 2633185, "Type": 2, "CreationDate": "11/03/2022 17:36:26", "LastActivityDate": "11/03/2022", "TotalViews": 2168, "TotalDownloads": 309, "TotalVotes": 8, "TotalKernels": 3}]
|
[{"Id": 12219559, "UserName": "gurpreetsdeol", "DisplayName": "Gurpreet Deol", "RegisterDate": "11/03/2022", "PerformanceTier": 0}]
|
# # 🧮 **Is it statistically easier to win the Premier League vs La Liga?**
# ## 👹 **numbers can sometimes be deceiving**
# ---
# via GIPHY
# ## 🪔 **Please Upvote my kernel if you liked it** ⬆️
# # 📋 **Table of content**
# ---
# ## **1. [Introduction](#introduction)**
# ## **2. [Body](#body)**
# > **2.1. [Change the theme of the notebook](#cttotn)**
# > **2.2. [Install and Import important libraries, Read the data](#iaiil)**
# > **2.3. [Preprocessing](#preprocessing)**
# > **2.4. [Null Hypothesis](#nullhypothesis)**
# > **2.5. [Check Normality](#checknormality)**
# > **2.6. [T-Test](#ttest)**
# > **2.7. [Result](#result)**
# ## **3. [Conclusion](#conclusion)**
# # 📖 **INTRODUCTION**
# ---
# ## About Dataset
# The dataset used for this analysis is the "Performance Data on Football Teams 09 to 22." This dataset encompasses a comprehensive range of performance statistics for football teams participating in the Premier League and La Liga during the period from 2009 to 2022. It includes data on various aspects of team performance, such as points earned, goals scored, goal differentials, and other relevant metrics.
# By utilizing this dataset, we were able to focus specifically on the points earned by the winners of each league during the specified timeframe. Points serve as a fundamental measure of success in football leagues, as they directly determine a team's position in the standings. Analyzing the average points earned by the league winners allowed us to draw meaningful comparisons between the Premier League and La Liga in terms of the difficulty of winning.
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# # 🦴 **BODY**
# ---
# ## 🖌️ Change the theme of the notebook
# ---
#
from IPython.core.display import HTML
with open("./CSS.css", "r") as file:
custom_css = file.read()
HTML(custom_css)
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# ## 🛠️ Install and Import important libraries, Read the data
# ---
#
import pandas as pd
import pingouin as pg
df = pd.read_csv(
"/kaggle/input/performance-data-on-football-teams-09-to-22/Complete Dataset 2.csv"
)
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# ## ⚗️ Preprocessing
# ---
#
df.head()
df.tail()
df.describe()
df.dtypes
df.info()
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# Add start and end of the Season
df["Start Season"] = df["Season"].apply(lambda x: int(x[:4]))
df["End Season"] = df["Season"].apply(lambda x: (x[5:]))
# ## 🇭0 Null hypothesis: Winning the Premier League is more challenging than winning La Liga
# ---
# Select all Winners Premier League and La Liga from 2009-2022
pl = df[(df["League"] == "Premier League") & (df["Rank"] == 1)][
["League", "Team", "Start Season", "End Season", "Points", "Wins"]
].reset_index(drop=True)
ll = df[(df["League"] == "La Liga") & (df["Rank"] == 1)][
["League", "Team", "Start Season", "End Season", "Points", "Wins"]
].reset_index(drop=True)
# Average
print(
f"""La Liga Winners Average Points in 2009-2022 is -> {round(ll["Points"].mean(),2)}"""
)
print(
f"""Premier League Winners Average Points in 2009-2022 is -> {round(pl["Points"].mean(),2)}"""
)
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# # 🔔 Check normality
# ---
print(
f"""Premier League Winner Points in 2009-2022 are normal -> {pg.normality(pl.Points.values)["normal"].values[0]}"""
)
print(
f"""LaLiga Winner Points in 2009-2022 are normal -> {pg.normality(ll.Points.values)["normal"].values[0]}"""
)
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# # 🧪 T-Test
# ## 2009 - 2022
# **[⬆️Back to Table of Contents ⬆️](#toc)**
# # ⚽ Result
#
pg.ttest(ll.Points.values, pl.Points.values, correction=False, alternative="greater")
| false | 1 | 1,253 | 0 | 1,420 | 1,253 |
||
129035267
|
<jupyter_start><jupyter_text>Sports Image Dataset
###Context
For Masters Project
### Content
It consists of 22 folders with label name as sports name. Each folder consists of around 800-900 images. This dataset is collected from Google Images using Images Scrapper.
Kaggle dataset identifier: sports-image-dataset
<jupyter_script># # Straight Line Detection
# https://www.kaggle.com/stpeteishii/straight-line-detection
# https://www.kaggle.com/stpeteishii/circle-detection
# ## cv2.Canny
# https://docs.opencv.org/3.4/da/d22/tutorial_py_canny.html
import os
import cv2
import matplotlib.pyplot as plt
import numpy as np
# paths=[]
# for dirname, _, filenames in os.walk('/kaggle/input/sports-image-dataset/data/badminton'):
# for filename in filenames:
# if filename[-4:]!='.csv':
# paths+=[(os.path.join(dirname, filename))]
path = "/kaggle/input/sports-image-dataset/data/badminton/00000052.jpg"
img = cv2.imread(path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.axis("off")
plt.show()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(gray)
plt.axis("off")
plt.show()
canny = cv2.Canny(gray, 100, 200)
plt.imshow(canny)
plt.axis("off")
plt.show()
edges = cv2.Canny(gray, 100, 200, apertureSize=3)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=150)
for line in lines:
    rho, theta = line[0]
    # (rho, theta) describe the detected line in polar form; convert it into two
    # far-apart endpoints so cv2.line can draw it across the whole image
a = np.cos(theta)
b = np.sin(theta)
x0 = a * rho
y0 = b * rho
x1 = int(x0 + 1000 * (-b))
y1 = int(y0 + 1000 * (a))
x2 = int(x0 - 1000 * (-b))
y2 = int(y0 - 1000 * (a))
cv2.line(img, (x1, y1), (x2, y2), (255, 255, 0), 3)
plt.imshow(img)
plt.axis("off")
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/035/129035267.ipynb
|
sports-image-dataset
|
rishikeshkonapure
|
[{"Id": 129035267, "ScriptId": 38351610, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 2648923, "CreationDate": "05/10/2023 13:20:44", "VersionNumber": 4.0, "Title": "Line Detection", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 58.0, "LinesInsertedFromPrevious": 1.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 57.0, "LinesInsertedFromFork": 46.0, "LinesDeletedFromFork": 41.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 12.0, "TotalVotes": 0}]
|
[{"Id": 184736755, "KernelVersionId": 129035267, "SourceDatasetVersionId": 1381286}]
|
[{"Id": 1381286, "DatasetId": 805924, "DatasourceVersionId": 1413867, "CreatorUserId": 1755651, "LicenseName": "CC BY-NC-SA 4.0", "CreationDate": "07/29/2020 19:05:52", "VersionNumber": 1.0, "Title": "Sports Image Dataset", "Slug": "sports-image-dataset", "Subtitle": "Collection of images for 22 types of sports", "Description": "###Context\nFor Masters Project\n\n### Content\nIt consists of 22 folders with label name as sports name. Each folder consists of around 800-900 images. This dataset is collected from Google Images using Images Scrapper.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 805924, "CreatorUserId": 1755651, "OwnerUserId": 1755651.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1381286.0, "CurrentDatasourceVersionId": 1413867.0, "ForumId": 820995, "Type": 2, "CreationDate": "07/29/2020 19:05:52", "LastActivityDate": "07/29/2020", "TotalViews": 6884, "TotalDownloads": 481, "TotalVotes": 14, "TotalKernels": 6}]
|
[{"Id": 1755651, "UserName": "rishikeshkonapure", "DisplayName": "Rushikesh Konapure", "RegisterDate": "03/25/2018", "PerformanceTier": 2}]
|
# # Straight Line Detection
# https://www.kaggle.com/stpeteishii/straight-line-detection
# https://www.kaggle.com/stpeteishii/circle-detection
# ## cv2.Canny
# https://docs.opencv.org/3.4/da/d22/tutorial_py_canny.html
import os
import cv2
import matplotlib.pyplot as plt
import numpy as np
# paths=[]
# for dirname, _, filenames in os.walk('/kaggle/input/sports-image-dataset/data/badminton'):
# for filename in filenames:
# if filename[-4:]!='.csv':
# paths+=[(os.path.join(dirname, filename))]
path = "/kaggle/input/sports-image-dataset/data/badminton/00000052.jpg"
img = cv2.imread(path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.axis("off")
plt.show()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(gray)
plt.axis("off")
plt.show()
canny = cv2.Canny(gray, 100, 200)
plt.imshow(canny)
plt.axis("off")
plt.show()
edges = cv2.Canny(gray, 100, 200, apertureSize=3)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=150)
for line in lines:
    rho, theta = line[0]
    # (rho, theta) describe the detected line in polar form; convert it into two
    # far-apart endpoints so cv2.line can draw it across the whole image
a = np.cos(theta)
b = np.sin(theta)
x0 = a * rho
y0 = b * rho
x1 = int(x0 + 1000 * (-b))
y1 = int(y0 + 1000 * (a))
x2 = int(x0 - 1000 * (-b))
y2 = int(y0 - 1000 * (a))
cv2.line(img, (x1, y1), (x2, y2), (255, 255, 0), 3)
plt.imshow(img)
plt.axis("off")
plt.show()
| false | 0 | 574 | 0 | 652 | 574 |
||
129035325
|
<jupyter_start><jupyter_text>Wild blueberry Yield Prediction Dataset
### Context
Blueberries are perennial flowering plants with blue or purple berries. They are classified in the section Cyanococcus within the genus Vaccinium. Vaccinium also includes cranberries, bilberries, huckleberries, and Madeira blueberries. Commercial blueberries—both wild (lowbush) and cultivated (highbush)—are all native to North America. The highbush varieties were introduced into Europe during the 1930s.
Blueberries are usually prostrate shrubs that can vary in size from 10 centimeters (4 inches) to 4 meters (13 feet) in height. In the commercial production of blueberries, the species with small, pea-size berries growing on low-level bushes are known as "lowbush blueberries" (synonymous with "wild"), while the species with larger berries growing on taller, cultivated bushes are known as "highbush blueberries". Canada is the leading producer of lowbush blueberries, while the United States produces some 40% of the world s supply of highbush blueberries.
### Content
"The dataset used for predictive modeling was generated by the Wild Blueberry Pollination Simulation Model, which is an open-source, spatially-explicit computer simulation program that enables exploration of how various factors, including plant spatial arrangement, outcrossing and self-pollination, bee species compositions and weather conditions, in isolation and combination, affect pollination efficiency and yield of the wild blueberry agroecosystem. The simulation model has been validated by the field observation and experimental data collected in Maine USA and Canadian Maritimes during the last 30 years and now is a useful tool for hypothesis testing and theory development for wild blueberry pollination researches."
Features Unit Description
Clonesize m2 The average blueberry clone size in the field
Honeybee bees/m2/min Honeybee density in the field
Bumbles bees/m2/min Bumblebee density in the field
Andrena bees/m2/min Andrena bee density in the field
Osmia bees/m2/min Osmia bee density in the field
MaxOfUpperTRange ℃ The highest record of the upper band daily air temperature during the bloom season
MinOfUpperTRange ℃ The lowest record of the upper band daily air temperature
AverageOfUpperTRange ℃ The average of the upper band daily air temperature
MaxOfLowerTRange ℃ The highest record of the lower band daily air temperature
MinOfLowerTRange ℃ The lowest record of the lower band daily air temperature
AverageOfLowerTRange ℃ The average of the lower band daily air temperature
RainingDays Day The total number of days during the bloom season, each of which has precipitation larger than zero
AverageRainingDays Day The average of raining days of the entire bloom season
Kaggle dataset identifier: wild-blueberry-yield-prediction-dataset
<jupyter_script># ### WELCOME TO PS S3 14
# **AUTHOR - SUJAY KAPADNIS**
# **DATE - 10/05/2023**
# ### Context
# Blueberries are perennial flowering plants with blue or purple berries. They are classified in the section Cyanococcus within the genus Vaccinium. Vaccinium also includes cranberries, bilberries, huckleberries, and Madeira blueberries. Commercial blueberries—both wild (lowbush) and cultivated (highbush)—are all native to North America. The highbush varieties were introduced into Europe during the 1930s.
# Blueberries are usually prostrate shrubs that can vary in size from 10 centimeters (4 inches) to 4 meters (13 feet) in height. In the commercial production of blueberries, the species with small, pea-size berries growing on low-level bushes are known as "lowbush blueberries" (synonymous with "wild"), while the species with larger berries growing on taller, cultivated bushes are known as "highbush blueberries". Canada is the leading producer of lowbush blueberries, while the United States produces some 40% of the world s supply of highbush blueberries.
# ### Data Description
# "The dataset used for predictive modeling was generated by the Wild Blueberry Pollination Simulation Model, which is an open-source, spatially-explicit computer simulation program that enables exploration of how various factors, including plant spatial arrangement, outcrossing and self-pollination, bee species compositions and weather conditions, in isolation and combination, affect pollination efficiency and yield of the wild blueberry agroecosystem. The simulation model has been validated by the field observation and experimental data collected in Maine USA and Canadian Maritimes during the last 30 years and now is a useful tool for hypothesis testing and theory development for wild blueberry pollination researches."
# Features Unit Description
# Clonesize m2 The average blueberry clone size in the field
# Honeybee bees/m2/min Honeybee density in the field
# Bumbles bees/m2/min Bumblebee density in the field
# Andrena bees/m2/min Andrena bee density in the field
# Osmia bees/m2/min Osmia bee density in the field
# MaxOfUpperTRange ℃ The highest record of the upper band daily air temperature during the bloom season
# MinOfUpperTRange ℃ The lowest record of the upper band daily air temperature
# AverageOfUpperTRange ℃ The average of the upper band daily air temperature
# MaxOfLowerTRange ℃ The highest record of the lower band daily air temperature
# MinOfLowerTRange ℃ The lowest record of the lower band daily air temperature
# AverageOfLowerTRange ℃ The average of the lower band daily air temperature
# RainingDays Day The total number of days during the bloom season, each of which has precipitation larger than zero
# AverageRainingDays Day The average of raining days of the entire bloom season
# ### Evaluation metric
# # Evaluation metrics mentioned in this competition.
# Submissions are evaluated according to the **Mean Absolute Error(MAE)**
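# A small illustrative snippet (not part of the original write-up) showing how MAE is
# computed: the mean of the absolute differences between predicted and true yields.
# The y_true values below are made up; y_pred reuses the sample-submission numbers.
import numpy as np

y_true = np.array([6000.0, 1250.0, 400.0])
y_pred = np.array([6025.194, 1256.223, 357.44])
print("MAE =", np.mean(np.abs(y_true - y_pred)))  # ~24.66 for this toy example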
# ### Sample Submission
# id,yield
# 15289,6025.194
# 15290,1256.223
# 15291,357.44
# ### This is a work in progress; if you find it helpful, kindly upvote
# ### LOAD YOUR DEPENDENCIES
# install nb_black for autoformating
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
from plotly.subplots import make_subplots
import plotly.graph_objects as go
from sklearn.model_selection import train_test_split, KFold
import optuna
from xgboost import XGBRegressor
from catboost import CatBoostRegressor
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error
from colorama import Style, Fore
red = Style.BRIGHT + Fore.RED
blu = Style.BRIGHT + Fore.BLUE
mgt = Style.BRIGHT + Fore.MAGENTA
grn = Style.BRIGHT + Fore.GREEN
gld = Style.BRIGHT + Fore.YELLOW
res = Style.RESET_ALL
TARGET = "yield"
train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv", index_col="id")
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv", index_col="id")
submission = pd.read_csv("/kaggle/input/playground-series-s3e14/sample_submission.csv")
origin = pd.read_csv(
"/kaggle/input/wild-blueberry-yield-prediction-dataset/WildBlueberryPollinationSimulationData.csv",
).drop("Row#", axis=1)
def info(train):
display(train.head())
print("*" * 50)
print(f"SHAPE OF THE DATA: {train.shape}")
print("*" * 50)
    if train.isnull().sum().sum() == 0:
print("NO NULL VALUES FOUND!")
else:
print(f"NULL VALUES: {train.isnull().sum()}")
print(train.info())
info(train)
info(test)
info(origin)
train.describe().T
# ### FEATURE DISTRIBUTION
# ### Train - Test Overlap
cont_col = [i for (i, j) in zip(test.columns, test.dtypes) if j in ["int", "float"]]
fig, axes = plt.subplots(4, 4, figsize=(30, 20))
for i, ax in enumerate(axes.flat):
sns.kdeplot(
ax=ax, data=train, x=cont_col[i], color="#F8766D", label="Train", fill=True
)
sns.kdeplot(
ax=ax, data=test, x=cont_col[i], color="#00BFC4", label="Test", fill=True
)
ax.set_title(f"{cont_col[i]} distribution")
fig.tight_layout()
plt.legend()
# ### Train - Original Overlap
fig, axes = plt.subplots(4, 4, figsize=(30, 20))
for i, ax in enumerate(axes.flat):
sns.kdeplot(
ax=ax, data=train, x=cont_col[i], color="#F8766D", label="Train", fill=True
)
sns.kdeplot(
ax=ax, data=origin, x=cont_col[i], color="#00BFC4", label="Original", fill=True
)
ax.set_title(f"{cont_col[i]} distribution")
fig.tight_layout()
plt.legend()
#
# 📌 Insight:
#
# 1. The train and test datasets follow very similar distributions
# 2. The original dataset and the (synthetic) train dataset are not very close; they are not on the same scale.
# 3. The `honeybee` and `bumbles` distributions have some outliers
# ### Remove highly correlated features
train = train.drop(
[
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"AverageRainingDays",
],
axis=1,
)
test = test.drop(
[
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"AverageRainingDays",
],
axis=1,
)
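# Hedged sanity check (not in the original notebook): re-read the raw training file and
# inspect how strongly the temperature-range columns that were just dropped track the
# retained MaxOfUpperTRange column. In this dataset they are expected to be almost
# perfectly correlated, which is why a single representative column is kept.
temp_cols = [
    "MaxOfUpperTRange", "MinOfUpperTRange", "AverageOfUpperTRange",
    "MaxOfLowerTRange", "MinOfLowerTRange", "AverageOfLowerTRange",
]
raw_train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv")
print(raw_train[temp_cols].corr().round(3))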
cont_col = [i for (i, j) in zip(test.columns, test.dtypes) if j in ["int", "float"]]
fig, axes = plt.subplots(3, 3, figsize=(30, 20))
for i, ax in enumerate(axes.flat):
sns.kdeplot(
ax=ax, data=train, x=cont_col[i], color="#F8766D", label="Train", fill=True
)
sns.kdeplot(
ax=ax, data=test, x=cont_col[i], color="#00BFC4", label="Test", fill=True
)
ax.set_title(f"{cont_col[i]} distribution")
fig.tight_layout()
plt.legend()
cat_feat = list()
for col in train.columns:
print(col)
if train[col].nunique() < 20:
cat_feat.append(col)
print(train[col].value_counts())
print(train.shape)
for col in test.columns:
print(col)
if test[col].nunique() < 20:
print(test[col].value_counts())
print(test.shape)
def outlier_removal(dataframe, features):
for feature_name in features:
Q1 = dataframe[feature_name].quantile(0.25)
Q3 = dataframe[feature_name].quantile(0.75)
IQR = Q3 - Q1
dataframe = dataframe[
(dataframe[feature_name] >= Q1 - 1.5 * IQR)
& (dataframe[feature_name] <= Q3 + 1.5 * IQR)
]
return dataframe
features = train.columns
train = outlier_removal(train, features)
for col in cat_feat:
q1 = test[col].quantile(0.25)
q3 = test[col].quantile(0.75)
iqr = q3 - q1
lower_bound = q1 - 1.5 * iqr
upper_bound = q3 + 1.5 * iqr
mean_value = test[col].mean()
test.loc[(test[col] < lower_bound) | (test[col] > upper_bound), col] = mean_value
cont_col = [i for (i, j) in zip(test.columns, test.dtypes) if j in ["int", "float"]]
fig, axes = plt.subplots(3, 3, figsize=(30, 20))
for i, ax in enumerate(axes.flat):
sns.kdeplot(
ax=ax, data=train, x=cont_col[i], color="#F8766D", label="Train", fill=True
)
sns.kdeplot(
ax=ax, data=test, x=cont_col[i], color="#00BFC4", label="Test", fill=True
)
ax.set_title(f"{cont_col[i]} distribution")
fig.tight_layout()
plt.legend()
# train_len = train.shape[0]
# y = train[TARGET]
# df = pd.concat([train.drop(TARGET,axis=1),test])
# df2 = pd.get_dummies(df,columns = cat_feat)
# df2.head()
# train = df2.iloc[:train_len,:]
# test = df2.iloc[train_len:,:]
# print(train.shape,test.shape)
# def objective(trial,data=train,target=y):
# train_x, test_x, train_y, test_y = train_test_split(data, target, test_size=0.15,random_state=42)
# param = {
# 'metric': 'rmse',
# 'random_state': 48,
# 'n_estimators': 20000,
# 'reg_alpha': trial.suggest_loguniform('reg_alpha', 1e-3, 10.0),
# 'reg_lambda': trial.suggest_loguniform('reg_lambda', 1e-3, 10.0),
# 'colsample_bytree': trial.suggest_categorical('colsample_bytree', [0.3,0.4,0.5,0.6,0.7,0.8,0.9, 1.0]),
# 'subsample': trial.suggest_categorical('subsample', [0.4,0.5,0.6,0.7,0.8,1.0]),
# 'learning_rate': trial.suggest_categorical('learning_rate', [0.006,0.008,0.01,0.014,0.017,0.02]),
# 'max_depth': trial.suggest_categorical('max_depth', [10,20,100]),
# 'num_leaves' : trial.suggest_int('num_leaves', 1, 1000),
# 'min_child_samples': trial.suggest_int('min_child_samples', 1, 300),
# 'cat_smooth' : trial.suggest_int('min_data_per_groups', 1, 100)
# }
# model = LGBMRegressor(**param)
# model.fit(train_x,train_y,eval_set=[(test_x,test_y)],early_stopping_rounds=100,verbose=False)
# preds = model.predict(test_x)
# mae = mean_absolute_error(test_y, preds)
# return mae
# study = optuna.create_study(direction='minimize')
# study.optimize(objective, n_trials=100)
# print('Number of finished trials:', len(study.trials))
# print('Best trial:', study.best_trial.params)
# Best_trial = study.best_trial.params
# Best_trial["n_estimators"], Best_trial["tree_method"] = 10000, 'gpu_hist'
# Best_trial
y = train[TARGET]
train.drop(TARGET, axis=1, inplace=True)
xgb_param = {
"lambda": 0.07630413570126751,
"alpha": 0.3487451222774151,
"colsample_bytree": 1.0,
"subsample": 0.4,
"learning_rate": 0.014,
"max_depth": 7,
"random_state": 2023,
"min_child_weight": 92,
"n_estimators": 10000,
"tree_method": "gpu_hist",
}
cat_param = {
"l2_leaf_reg": 0.002343598793171317,
"max_bin": 236,
"learning_rate": 0.009198430945898738,
"max_depth": 15,
"random_state": 2020,
"min_data_in_leaf": 135,
"n_estimators": 10000,
}
lgbm_param = {
"reg_alpha": 0.0015354651359190611,
"reg_lambda": 2.346151892673244,
"colsample_bytree": 1.0,
"subsample": 0.8,
"learning_rate": 0.02,
"max_depth": 10,
"num_leaves": 119,
"min_child_samples": 188,
# 'min_data_per_groups': 7,
"n_estimators": 10000,
# 'tree_method': 'gpu_hist'
}
preds = list()
kf = KFold(n_splits=10, random_state=48, shuffle=True)
for idx, (trn_idx, test_idx) in enumerate(kf.split(train, y)):
X_tr, X_val = train.iloc[trn_idx], train.iloc[test_idx]
y_tr, y_val = y.iloc[trn_idx], y.iloc[test_idx]
model = XGBRegressor(**xgb_param)
model.fit(
X_tr, y_tr, eval_set=[(X_val, y_val)], early_stopping_rounds=100, verbose=False
)
preds.append(model.predict(test))
mae = mean_absolute_error(y_val, model.predict(X_val))
print(f"Fold {idx+1}: MAE: {mae}")
xgb_pred = np.mean(preds, axis=0)
preds = list()
kf = KFold(n_splits=2, random_state=48, shuffle=True)
for idx, (trn_idx, test_idx) in enumerate(kf.split(train, y)):
X_tr, X_val = train.iloc[trn_idx], train.iloc[test_idx]
y_tr, y_val = y.iloc[trn_idx], y.iloc[test_idx]
model = CatBoostRegressor(**cat_param)
model.fit(
X_tr, y_tr, eval_set=[(X_val, y_val)], early_stopping_rounds=100, verbose=False
)
preds.append(model.predict(test))
mae = mean_absolute_error(y_val, model.predict(X_val))
print(f"Fold {idx+1}: MAE: {mae}")
cat_pred = np.mean(preds, axis=0)
preds = list()
kf = KFold(n_splits=10, random_state=48, shuffle=True)
for idx, (trn_idx, test_idx) in enumerate(kf.split(train, y)):
X_tr, X_val = train.iloc[trn_idx], train.iloc[test_idx]
y_tr, y_val = y.iloc[trn_idx], y.iloc[test_idx]
model = LGBMRegressor(**lgbm_param)
model.fit(
X_tr, y_tr, eval_set=[(X_val, y_val)], early_stopping_rounds=100, verbose=False
)
preds.append(model.predict(test))
mae = mean_absolute_error(y_val, model.predict(X_val))
print(f"Fold {idx+1}: MAE: {mae}")
lgbm_pred = np.mean(preds, axis=0)
prediction = (xgb_pred + cat_pred + lgbm_pred) / 3
sub = submission.copy()
sub["yield"] = prediction
sub.head()
sub.to_csv("Output.csv", index=False)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/035/129035325.ipynb
|
wild-blueberry-yield-prediction-dataset
|
shashwatwork
|
[{"Id": 129035325, "ScriptId": 38338120, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8399766, "CreationDate": "05/10/2023 13:21:12", "VersionNumber": 3.0, "Title": "PS S3E14 BASELINE | FD", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 392.0, "LinesInsertedFromPrevious": 13.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 379.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184736975, "KernelVersionId": 129035325, "SourceDatasetVersionId": 2462316}]
|
[{"Id": 2462316, "DatasetId": 1490445, "DatasourceVersionId": 2504743, "CreatorUserId": 1444085, "LicenseName": "Attribution 4.0 International (CC BY 4.0)", "CreationDate": "07/25/2021 17:48:21", "VersionNumber": 2.0, "Title": "Wild blueberry Yield Prediction Dataset", "Slug": "wild-blueberry-yield-prediction-dataset", "Subtitle": "Predict the yield of Wild Blueberry", "Description": "### Context\n\nBlueberries are perennial flowering plants with blue or purple berries. They are classified in the section Cyanococcus within the genus Vaccinium. Vaccinium also includes cranberries, bilberries, huckleberries, and Madeira blueberries. Commercial blueberries\u2014both wild (lowbush) and cultivated (highbush)\u2014are all native to North America. The highbush varieties were introduced into Europe during the 1930s.\n\nBlueberries are usually prostrate shrubs that can vary in size from 10 centimeters (4 inches) to 4 meters (13 feet) in height. In the commercial production of blueberries, the species with small, pea-size berries growing on low-level bushes are known as \"lowbush blueberries\" (synonymous with \"wild\"), while the species with larger berries growing on taller, cultivated bushes are known as \"highbush blueberries\". Canada is the leading producer of lowbush blueberries, while the United States produces some 40% of the world s supply of highbush blueberries.\n\n### Content\n\n\"The dataset used for predictive modeling was generated by the Wild Blueberry Pollination Simulation Model, which is an open-source, spatially-explicit computer simulation program that enables exploration of how various factors, including plant spatial arrangement, outcrossing and self-pollination, bee species compositions and weather conditions, in isolation and combination, affect pollination efficiency and yield of the wild blueberry agroecosystem. 
The simulation model has been validated by the field observation and experimental data collected in Maine USA and Canadian Maritimes during the last 30 years and now is a useful tool for hypothesis testing and theory development for wild blueberry pollination researches.\"\n\nFeatures \tUnit\tDescription\nClonesize\tm2\tThe average blueberry clone size in the field\nHoneybee\tbees/m2/min\tHoneybee density in the field\nBumbles\tbees/m2/min\tBumblebee density in the field\nAndrena\tbees/m2/min\tAndrena bee density in the field\nOsmia\tbees/m2/min\tOsmia bee density in the field\nMaxOfUpperTRange\t\u2103\tThe highest record of the upper band daily air temperature during the bloom season\nMinOfUpperTRange\t\u2103\tThe lowest record of the upper band daily air temperature\nAverageOfUpperTRange\t\u2103\tThe average of the upper band daily air temperature\nMaxOfLowerTRange\t\u2103\tThe highest record of the lower band daily air temperature\nMinOfLowerTRange\t\u2103\tThe lowest record of the lower band daily air temperature\nAverageOfLowerTRange\t\u2103\tThe average of the lower band daily air temperature\nRainingDays\tDay\tThe total number of days during the bloom season, each of which has precipitation larger than zero\nAverageRainingDays\tDay\tThe average of raining days of the entire bloom season\n\n### Acknowledgements\n\nQu, Hongchun; Obsie, Efrem; Drummond, Frank (2020), \u201cData for: Wild blueberry yield prediction using a combination of computer simulation and machine learning algorithms\u201d, Mendeley Data, V1, doi: 10.17632/p5hvjzsvn8.1\n\nDataset is outsourced from [here.](https://data.mendeley.com/datasets/p5hvjzsvn8/1)", "VersionNotes": "updated", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1490445, "CreatorUserId": 1444085, "OwnerUserId": 1444085.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2462316.0, "CurrentDatasourceVersionId": 2504743.0, "ForumId": 1510148, "Type": 2, "CreationDate": "07/25/2021 17:47:00", "LastActivityDate": "07/25/2021", "TotalViews": 11876, "TotalDownloads": 1130, "TotalVotes": 48, "TotalKernels": 82}]
|
[{"Id": 1444085, "UserName": "shashwatwork", "DisplayName": "Shashwat Tiwari", "RegisterDate": "11/24/2017", "PerformanceTier": 2}]
|
# ### WELCOME TO PS S3 14
# **AUTHOR - SUJAY KAPADNIS**
# **DATE - 10/05/2023**
# ### Context
# Blueberries are perennial flowering plants with blue or purple berries. They are classified in the section Cyanococcus within the genus Vaccinium. Vaccinium also includes cranberries, bilberries, huckleberries, and Madeira blueberries. Commercial blueberries—both wild (lowbush) and cultivated (highbush)—are all native to North America. The highbush varieties were introduced into Europe during the 1930s.
# Blueberries are usually prostrate shrubs that can vary in size from 10 centimeters (4 inches) to 4 meters (13 feet) in height. In the commercial production of blueberries, the species with small, pea-size berries growing on low-level bushes are known as "lowbush blueberries" (synonymous with "wild"), while the species with larger berries growing on taller, cultivated bushes are known as "highbush blueberries". Canada is the leading producer of lowbush blueberries, while the United States produces some 40% of the world s supply of highbush blueberries.
# ### Data Description
# "The dataset used for predictive modeling was generated by the Wild Blueberry Pollination Simulation Model, which is an open-source, spatially-explicit computer simulation program that enables exploration of how various factors, including plant spatial arrangement, outcrossing and self-pollination, bee species compositions and weather conditions, in isolation and combination, affect pollination efficiency and yield of the wild blueberry agroecosystem. The simulation model has been validated by the field observation and experimental data collected in Maine USA and Canadian Maritimes during the last 30 years and now is a useful tool for hypothesis testing and theory development for wild blueberry pollination researches."
# Features Unit Description
# Clonesize m2 The average blueberry clone size in the field
# Honeybee bees/m2/min Honeybee density in the field
# Bumbles bees/m2/min Bumblebee density in the field
# Andrena bees/m2/min Andrena bee density in the field
# Osmia bees/m2/min Osmia bee density in the field
# MaxOfUpperTRange ℃ The highest record of the upper band daily air temperature during the bloom season
# MinOfUpperTRange ℃ The lowest record of the upper band daily air temperature
# AverageOfUpperTRange ℃ The average of the upper band daily air temperature
# MaxOfLowerTRange ℃ The highest record of the lower band daily air temperature
# MinOfLowerTRange ℃ The lowest record of the lower band daily air temperature
# AverageOfLowerTRange ℃ The average of the lower band daily air temperature
# RainingDays Day The total number of days during the bloom season, each of which has precipitation larger than zero
# AverageRainingDays Day The average of raining days of the entire bloom season
# ### Evaluation metric
# # Evaluation metrics mentioned in this competition.
# Submissions are evaluated according to the **Mean Absolute Error(MAE)**
# ### Sample Submission
# id,yield
# 15289,6025.194
# 15290,1256.223
# 15291,357.44
# ### This is a work in progress; if you find it helpful, kindly upvote
# ### LOAD YOUR DEPENDENCIES
# install nb_black for autoformating
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
from plotly.subplots import make_subplots
import plotly.graph_objects as go
from sklearn.model_selection import train_test_split, KFold
import optuna
from xgboost import XGBRegressor
from catboost import CatBoostRegressor
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error
from colorama import Style, Fore
red = Style.BRIGHT + Fore.RED
blu = Style.BRIGHT + Fore.BLUE
mgt = Style.BRIGHT + Fore.MAGENTA
grn = Style.BRIGHT + Fore.GREEN
gld = Style.BRIGHT + Fore.YELLOW
res = Style.RESET_ALL
TARGET = "yield"
train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv", index_col="id")
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv", index_col="id")
submission = pd.read_csv("/kaggle/input/playground-series-s3e14/sample_submission.csv")
origin = pd.read_csv(
"/kaggle/input/wild-blueberry-yield-prediction-dataset/WildBlueberryPollinationSimulationData.csv",
).drop("Row#", axis=1)
def info(train):
display(train.head())
print("*" * 50)
print(f"SHAPE OF THE DATA: {train.shape}")
print("*" * 50)
    if train.isnull().sum().sum() == 0:
print("NO NULL VALUES FOUND!")
else:
print(f"NULL VALUES: {train.isnull().sum()}")
print(train.info())
info(train)
info(test)
info(origin)
train.describe().T
# ### FEATURE DISTRIBUTION
# ### Train - Test Overlap
cont_col = [i for (i, j) in zip(test.columns, test.dtypes) if j in ["int", "float"]]
fig, axes = plt.subplots(4, 4, figsize=(30, 20))
for i, ax in enumerate(axes.flat):
sns.kdeplot(
ax=ax, data=train, x=cont_col[i], color="#F8766D", label="Train", fill=True
)
sns.kdeplot(
ax=ax, data=test, x=cont_col[i], color="#00BFC4", label="Test", fill=True
)
ax.set_title(f"{cont_col[i]} distribution")
fig.tight_layout()
plt.legend()
# ### Train - Original Overlap
fig, axes = plt.subplots(4, 4, figsize=(30, 20))
for i, ax in enumerate(axes.flat):
sns.kdeplot(
ax=ax, data=train, x=cont_col[i], color="#F8766D", label="Train", fill=True
)
sns.kdeplot(
ax=ax, data=origin, x=cont_col[i], color="#00BFC4", label="Original", fill=True
)
ax.set_title(f"{cont_col[i]} distribution")
fig.tight_layout()
plt.legend()
#
# 📌 Insight:
#
# 1. The train and test datasets follow very similar distributions
# 2. The original dataset and the (synthetic) train dataset are not very close; they are not on the same scale.
# 3. The `honeybee` and `bumbles` distributions have some outliers
# ### Remove highly correlated features
train = train.drop(
[
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"AverageRainingDays",
],
axis=1,
)
test = test.drop(
[
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"AverageOfLowerTRange",
"AverageRainingDays",
],
axis=1,
)
cont_col = [i for (i, j) in zip(test.columns, test.dtypes) if j in ["int", "float"]]
fig, axes = plt.subplots(3, 3, figsize=(30, 20))
for i, ax in enumerate(axes.flat):
sns.kdeplot(
ax=ax, data=train, x=cont_col[i], color="#F8766D", label="Train", fill=True
)
sns.kdeplot(
ax=ax, data=test, x=cont_col[i], color="#00BFC4", label="Test", fill=True
)
ax.set_title(f"{cont_col[i]} distribution")
fig.tight_layout()
plt.legend()
cat_feat = list()
for col in train.columns:
print(col)
if train[col].nunique() < 20:
cat_feat.append(col)
print(train[col].value_counts())
print(train.shape)
for col in test.columns:
print(col)
if test[col].nunique() < 20:
print(test[col].value_counts())
print(test.shape)
def outlier_removal(dataframe, features):
for feature_name in features:
Q1 = dataframe[feature_name].quantile(0.25)
Q3 = dataframe[feature_name].quantile(0.75)
IQR = Q3 - Q1
dataframe = dataframe[
(dataframe[feature_name] >= Q1 - 1.5 * IQR)
& (dataframe[feature_name] <= Q3 + 1.5 * IQR)
]
return dataframe
features = train.columns
train = outlier_removal(train, features)
for col in cat_feat:
q1 = test[col].quantile(0.25)
q3 = test[col].quantile(0.75)
iqr = q3 - q1
lower_bound = q1 - 1.5 * iqr
upper_bound = q3 + 1.5 * iqr
mean_value = test[col].mean()
test.loc[(test[col] < lower_bound) | (test[col] > upper_bound), col] = mean_value
cont_col = [i for (i, j) in zip(test.columns, test.dtypes) if j in ["int", "float"]]
fig, axes = plt.subplots(3, 3, figsize=(30, 20))
for i, ax in enumerate(axes.flat):
sns.kdeplot(
ax=ax, data=train, x=cont_col[i], color="#F8766D", label="Train", fill=True
)
sns.kdeplot(
ax=ax, data=test, x=cont_col[i], color="#00BFC4", label="Test", fill=True
)
ax.set_title(f"{cont_col[i]} distribution")
fig.tight_layout()
plt.legend()
# train_len = train.shape[0]
# y = train[TARGET]
# df = pd.concat([train.drop(TARGET,axis=1),test])
# df2 = pd.get_dummies(df,columns = cat_feat)
# df2.head()
# train = df2.iloc[:train_len,:]
# test = df2.iloc[train_len:,:]
# print(train.shape,test.shape)
# def objective(trial,data=train,target=y):
# train_x, test_x, train_y, test_y = train_test_split(data, target, test_size=0.15,random_state=42)
# param = {
# 'metric': 'rmse',
# 'random_state': 48,
# 'n_estimators': 20000,
# 'reg_alpha': trial.suggest_loguniform('reg_alpha', 1e-3, 10.0),
# 'reg_lambda': trial.suggest_loguniform('reg_lambda', 1e-3, 10.0),
# 'colsample_bytree': trial.suggest_categorical('colsample_bytree', [0.3,0.4,0.5,0.6,0.7,0.8,0.9, 1.0]),
# 'subsample': trial.suggest_categorical('subsample', [0.4,0.5,0.6,0.7,0.8,1.0]),
# 'learning_rate': trial.suggest_categorical('learning_rate', [0.006,0.008,0.01,0.014,0.017,0.02]),
# 'max_depth': trial.suggest_categorical('max_depth', [10,20,100]),
# 'num_leaves' : trial.suggest_int('num_leaves', 1, 1000),
# 'min_child_samples': trial.suggest_int('min_child_samples', 1, 300),
# 'cat_smooth' : trial.suggest_int('min_data_per_groups', 1, 100)
# }
# model = LGBMRegressor(**param)
# model.fit(train_x,train_y,eval_set=[(test_x,test_y)],early_stopping_rounds=100,verbose=False)
# preds = model.predict(test_x)
# mae = mean_absolute_error(test_y, preds)
# return mae
# study = optuna.create_study(direction='minimize')
# study.optimize(objective, n_trials=100)
# print('Number of finished trials:', len(study.trials))
# print('Best trial:', study.best_trial.params)
# Best_trial = study.best_trial.params
# Best_trial["n_estimators"], Best_trial["tree_method"] = 10000, 'gpu_hist'
# Best_trial
y = train[TARGET]
train.drop(TARGET, axis=1, inplace=True)
xgb_param = {
"lambda": 0.07630413570126751,
"alpha": 0.3487451222774151,
"colsample_bytree": 1.0,
"subsample": 0.4,
"learning_rate": 0.014,
"max_depth": 7,
"random_state": 2023,
"min_child_weight": 92,
"n_estimators": 10000,
"tree_method": "gpu_hist",
}
cat_param = {
"l2_leaf_reg": 0.002343598793171317,
"max_bin": 236,
"learning_rate": 0.009198430945898738,
"max_depth": 15,
"random_state": 2020,
"min_data_in_leaf": 135,
"n_estimators": 10000,
}
lgbm_param = {
"reg_alpha": 0.0015354651359190611,
"reg_lambda": 2.346151892673244,
"colsample_bytree": 1.0,
"subsample": 0.8,
"learning_rate": 0.02,
"max_depth": 10,
"num_leaves": 119,
"min_child_samples": 188,
# 'min_data_per_groups': 7,
"n_estimators": 10000,
# 'tree_method': 'gpu_hist'
}
preds = list()
kf = KFold(n_splits=10, random_state=48, shuffle=True)
for idx, (trn_idx, test_idx) in enumerate(kf.split(train, y)):
X_tr, X_val = train.iloc[trn_idx], train.iloc[test_idx]
y_tr, y_val = y.iloc[trn_idx], y.iloc[test_idx]
model = XGBRegressor(**xgb_param)
model.fit(
X_tr, y_tr, eval_set=[(X_val, y_val)], early_stopping_rounds=100, verbose=False
)
preds.append(model.predict(test))
mae = mean_absolute_error(y_val, model.predict(X_val))
print(f"Fold {idx+1}: MAE: {mae}")
xgb_pred = np.mean(preds, axis=0)
preds = list()
kf = KFold(n_splits=2, random_state=48, shuffle=True)
for idx, (trn_idx, test_idx) in enumerate(kf.split(train, y)):
X_tr, X_val = train.iloc[trn_idx], train.iloc[test_idx]
y_tr, y_val = y.iloc[trn_idx], y.iloc[test_idx]
model = CatBoostRegressor(**cat_param)
model.fit(
X_tr, y_tr, eval_set=[(X_val, y_val)], early_stopping_rounds=100, verbose=False
)
preds.append(model.predict(test))
mae = mean_absolute_error(y_val, model.predict(X_val))
print(f"Fold {idx+1}: MAE: {mae}")
cat_pred = np.mean(preds, axis=0)
preds = list()
kf = KFold(n_splits=10, random_state=48, shuffle=True)
for idx, (trn_idx, test_idx) in enumerate(kf.split(train, y)):
X_tr, X_val = train.iloc[trn_idx], train.iloc[test_idx]
y_tr, y_val = y.iloc[trn_idx], y.iloc[test_idx]
model = LGBMRegressor(**lgbm_param)
model.fit(
X_tr, y_tr, eval_set=[(X_val, y_val)], early_stopping_rounds=100, verbose=False
)
preds.append(model.predict(test))
mae = mean_absolute_error(y_val, model.predict(X_val))
print(f"Fold {idx+1}: MAE: {mae}")
lgbm_pred = np.mean(preds, axis=0)
prediction = (xgb_pred + cat_pred + lgbm_pred) / 3
sub = submission.copy()
sub["yield"] = prediction
sub.head()
sub.to_csv("Output.csv", index=False)
| false | 4 | 4,554 | 0 | 5,290 | 4,554 |
||
129035264
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # **Load Dataset**
train = pd.read_csv("/kaggle/input/titanic/train.csv")
train.head()
test = pd.read_csv("/kaggle/input/titanic/test.csv")
test.head()
# # **Understanding the data**
train.shape
test.shape
train.info() # age has nulls, cabin has too many nulls, embarked has just 2 nulls
test.info() # age has nulls, cabin has too many nulls
# Check duplicates
train.duplicated().sum()
test.duplicated().sum()
women = train.loc[train.Sex == "female"]["Survived"]
women_sur_rate = sum(women) / len(women)
print("% of women who survived:", women_sur_rate)
men = train.loc[train.Sex == "male"]["Survived"]
men_sur_rate = sum(men) / len(men)
print("% of men who survived:", men_sur_rate)
# visualizing missing data
import missingno as msno
msno.matrix(train)
# removing columns that seem unimportant
train.drop(["PassengerId", "Ticket", "Cabin", "Name"], axis=1, inplace=True)
msno.matrix(train)
train.Age.fillna(train.Age.mean(), inplace=True)
msno.matrix(train)
train.dropna(inplace=True)
msno.matrix(train)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/035/129035264.ipynb
| null | null |
[{"Id": 129035264, "ScriptId": 38344913, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1290211, "CreationDate": "05/10/2023 13:20:42", "VersionNumber": 1.0, "Title": "Titanic survival prediction", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 72.0, "LinesInsertedFromPrevious": 72.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # **Load Dataset**
train = pd.read_csv("/kaggle/input/titanic/train.csv")
train.head()
test = pd.read_csv("/kaggle/input/titanic/test.csv")
test.head()
# # **Understanding the data**
train.shape
test.shape
train.info() # age has nulls, cabin has too many nulls, embarked has just 2 nulls
test.info() # age has nulls, cabin has too many nulls
# Check duplicates
train.duplicated().sum()
test.duplicated().sum()
women = train.loc[train.Sex == "female"]["Survived"]
women_sur_rate = sum(women) / len(women)
print("% of women who survived:", women_sur_rate)
men = train.loc[train.Sex == "male"]["Survived"]
men_sur_rate = sum(men) / len(men)
print("% of men who survived:", men_sur_rate)
# visualizing missing data
import missingno as msno
msno.matrix(train)
# removing columns that seem unimportant
train.drop(["PassengerId", "Ticket", "Cabin", "Name"], axis=1, inplace=True)
msno.matrix(train)
train.Age.fillna(train.Age.mean(), inplace=True)
msno.matrix(train)
train.dropna(inplace=True)
msno.matrix(train)
| false | 0 | 537 | 0 | 537 | 537 |
||
129035406
|
# Refer: https://www.youtube.com/watch?v=_4A9inxGqRM
# Renaming classes.txt in OID toolkit with the required class
# set the file path
file_path = "/content/OIDv4_ToolKit/classes.txt"
# open the file in write mode and overwrite its contents
with open(file_path, mode="w") as f:
f.write("Car")
# print a message to confirm the operation
print(f"File {file_path} has been updated.")
import os
# set the new directory path
new_dir = "/content"
# change the current working directory to the new directory
os.chdir(new_dir)
# print the new working directory
print("New working directory:", os.getcwd())
import os
import cv2
import numpy as np
from tqdm import tqdm
import argparse
import fileinput
# function that turns XMin, YMin, XMax, YMax coordinates to normalized yolo format
def convert(filename_str, coords):
os.chdir("..")
image = cv2.imread(filename_str + ".jpg")
coords[2] -= coords[0]
coords[3] -= coords[1]
x_diff = int(coords[2] / 2)
y_diff = int(coords[3] / 2)
coords[0] = coords[0] + x_diff
coords[1] = coords[1] + y_diff
coords[0] /= int(image.shape[1])
coords[1] /= int(image.shape[0])
coords[2] /= int(image.shape[1])
coords[3] /= int(image.shape[0])
os.chdir("Label")
return coords
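# A quick sanity check of the conversion math above on made-up numbers (no image
# file or directory changes needed). Unlike convert(), this sketch keeps exact
# halves instead of truncating the half-width/height to int.
def to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    box_w = xmax - xmin
    box_h = ymax - ymin
    x_center = xmin + box_w / 2
    y_center = ymin + box_h / 2
    return (x_center / img_w, y_center / img_h, box_w / img_w, box_h / img_h)


# a (100, 200, 300, 400) box in a 640x480 image -> (0.3125, 0.625, 0.3125, ~0.417)
print(to_yolo(100, 200, 300, 400, 640, 480))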
ROOT_DIR = os.getcwd()
# create dict to map class names to numbers for yolo
classes = {}
with open("/content/OIDv4_ToolKit/classes.txt", "r") as myFile:
for num, line in enumerate(myFile, 0):
line = line.rstrip("\n")
classes[line] = num
myFile.close()
# step into dataset directory
os.chdir(os.path.join("OID", "Dataset"))
DIRS = os.listdir(os.getcwd())
# for all train, validation and test folders
for DIR in DIRS:
if os.path.isdir(DIR):
os.chdir(DIR)
print("Currently in subdirectory:", DIR)
CLASS_DIRS = os.listdir(os.getcwd())
# for all class folders step into directory to change annotations
for CLASS_DIR in CLASS_DIRS:
if os.path.isdir(CLASS_DIR):
os.chdir(CLASS_DIR)
print("Converting annotations for class: ", CLASS_DIR)
# Step into Label folder where annotations are generated
os.chdir("Label")
for filename in tqdm(os.listdir(os.getcwd())):
filename_str = str.split(filename, ".")[0]
if filename.endswith(".txt"):
annotations = []
with open(filename) as f:
for line in f:
for class_type in classes:
line = line.replace(
class_type, str(classes.get(class_type))
)
labels = line.split()
coords = np.asarray(
[
float(labels[1]),
float(labels[2]),
float(labels[3]),
float(labels[4]),
]
)
coords = convert(filename_str, coords)
labels[1], labels[2], labels[3], labels[4] = (
coords[0],
coords[1],
coords[2],
coords[3],
)
newline = (
str(labels[0])
+ " "
+ str(labels[1])
+ " "
+ str(labels[2])
+ " "
+ str(labels[3])
+ " "
+ str(labels[4])
)
line = line.replace(line, newline)
annotations.append(line)
f.close()
os.chdir("..")
with open(filename, "w") as outfile:
for line in annotations:
outfile.write(line)
outfile.write("\n")
outfile.close()
os.chdir("Label")
os.chdir("..")
os.chdir("..")
os.chdir("..")
import yaml
# Define the dictionary to be saved as YAML
data = {
"path": "/content/OID/Dataset",
"train": "/content/OID/Dataset/train",
"val": "/content/OID/Dataset/test",
"names": {0: "Car"},
}
# Convert the dictionary to YAML format
yaml_data = yaml.dump(data)
# Save the YAML data to a file
with open("/content/google_colab_config.yaml", "w") as f:
f.write(yaml_data)
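# Optional check: read the config back to confirm the keys written above
# (path, train, val, names) round-trip correctly.
with open("/content/google_colab_config.yaml") as f:
    print(yaml.safe_load(f))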
import os
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8s.yaml") # build a new model from scratch
# Use the model
results = model.train(
data=os.path.join("/content/google_colab_config.yaml"), epochs=20
) # train the model
# Weights are saved to the drive
from google.colab import drive
drive.mount("/content/gdrive")
import os
# set the new directory path
new_dir = "/content"
# change the current working directory to the new directory
os.chdir(new_dir)
# print the new working directory
print("New working directory:", os.getcwd())
import os
import glob
from IPython.display import Image, display
# Define the directory path
dir_path = "/content/gdrive/My Drive/vrp/runs/detect/train"
# Use glob to find all .jpg files in the directory
jpg_files = glob.glob(os.path.join(dir_path, "*.jpg"))
# Loop through the files and display them
for file in jpg_files:
display(Image(filename=file, width=500))
# # Custom video detection
import os
from ultralytics import YOLO
import cv2
VIDEOS_DIR = os.path.join(".", "videos")
video_path = os.path.join("/content/vrp.mp4")
video_path_out = "video_ouput.mp4".format(video_path)
if not os.path.exists(video_path):
raise FileNotFoundError("Video file not found at path: {}".format(video_path))
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
raise Exception("Error opening video file: {}".format(video_path))
ret, frame = cap.read()
if not ret:
raise Exception("Error reading video file: {}".format(video_path))
H, W, _ = frame.shape
out = cv2.VideoWriter(
video_path_out,
cv2.VideoWriter_fourcc(*"MP4V"),
int(cap.get(cv2.CAP_PROP_FPS)),
(W, H),
)
model_path = "/content/OID/Dataset/runs/detect/train/weights/best.pt"
# Load a model
model = YOLO(model_path) # load a custom model
threshold = 0.2
class_name_dict = {0: "Vehicle registration plate"}
while ret:
results = model(frame)[0]
for result in results.boxes.data.tolist():
x1, y1, x2, y2, score, class_id = result
if score > threshold:
cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
cv2.putText(
frame,
class_name_dict[int(class_id)].upper(),
(int(x1), int(y1 - 10)),
cv2.FONT_HERSHEY_SIMPLEX,
1.3,
(0, 255, 0),
3,
cv2.LINE_AA,
)
out.write(frame)
ret, frame = cap.read()
if not ret:
break
cap.release()
out.release()
cv2.destroyAllWindows()
import os
from ultralytics import YOLO
import cv2
VIDEOS_DIR = os.path.join(".", "videos")
video_path = os.path.join("/content/vrp.mp4")
video_path_out = "video_ouput.mp4".format(video_path)
if not os.path.exists(video_path):
raise FileNotFoundError("Video file not found at path: {}".format(video_path))
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
raise Exception("Error opening video file: {}".format(video_path))
ret, frame = cap.read()
if not ret:
raise Exception("Error reading video file: {}".format(video_path))
H, W, _ = frame.shape
out = cv2.VideoWriter(
video_path_out,
cv2.VideoWriter_fourcc(*"MP4V"),
int(cap.get(cv2.CAP_PROP_FPS)),
(W, H),
)
model_path = "/content/OID/Dataset/runs/detect/train/weights/best.pt"
# Load a model
model = YOLO(model_path) # load a custom model
threshold = 0.2
class_name_dict = {0: "Vehicle registration plate"}
while ret:
results = model(frame)[0]
    # YOLOv8 already applies non-maximum suppression inside predict, so the boxes in
    # `results` are de-duplicated; filter the remaining detections by confidence.
    for result in results.boxes.data.tolist():
        x1, y1, x2, y2, score, class_id = result
        if score <= threshold:
            continue
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
cv2.putText(
frame,
class_name_dict[int(class_id)].upper(),
(int(x1), int(y1 - 10)),
cv2.FONT_HERSHEY_SIMPLEX,
1.3,
(0, 255, 0),
3,
cv2.LINE_AA,
)
out.write(frame)
ret, frame = cap.read()
if not ret:
break
cap.release()
out.release()
cv2.destroyAllWindows()
# # Custom Image Detection
import os
from ultralytics import YOLO
import cv2
from google.colab.patches import cv2_imshow
image_path = "/content/OID/Dataset/train/Car/3655347acded111c.jpg"
image_out_path = "test_out.jpg"
if not os.path.exists(image_path):
raise FileNotFoundError("Image file not found at path: {}".format(image_path))
img = cv2.imread(image_path)
# Load a model
model_path = "/content/OID/Dataset/runs/detect/train/weights/best.pt"
model = YOLO(model_path)
threshold = 0.2
class_name_dict = {0: "Car"}
results = model(img)[0]
for result in results.boxes.data.tolist():
x1, y1, x2, y2, score, class_id = result
print(score)
if score > threshold:
cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
cv2.putText(
img,
class_name_dict[int(class_id)].upper(),
(int(x1), int(y1 - 10)),
cv2.FONT_HERSHEY_SIMPLEX,
1.3,
(0, 255, 0),
3,
cv2.LINE_AA,
)
cv2.imwrite(image_out_path, img)
cv2_imshow(img)
import locale
locale.getpreferredencoding = lambda: "UTF-8"
from google.colab import drive
drive.mount("/content/drive")
import os
import pytesseract
from PIL import Image
from ultralytics import YOLO
import cv2
from google.colab.patches import cv2_imshow
image_path = "/content/OID/Dataset/test/Vehicle registration plate/0cdae528456599ed.jpg"
image_out_path = "test_out.jpg"
plate_out_path = "plate_number.jpg"
if not os.path.exists(image_path):
raise FileNotFoundError("Image file not found at path: {}".format(image_path))
img = cv2.imread(image_path)
# Load a model
model_path = "/content/OID/Dataset/runs/detect/train/weights/best.pt"
model = YOLO(model_path)
threshold = 0.5
class_name_dict = {0: "Vehicle registration plate"}
results = model(img)[0]
for result in results.boxes.data.tolist():
x1, y1, x2, y2, score, class_id = result
if (
score > threshold
and class_name_dict[int(class_id)] == "Vehicle registration plate"
):
cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
cv2.imwrite(plate_out_path, img[int(y1) : int(y2), int(x1) : int(x2)])
cv2.imwrite(image_out_path, img)
cv2_imshow(img)
# OCR to read number plate
plate_img = Image.open(plate_out_path)
plate_text = pytesseract.image_to_string(plate_img, config="--psm 7")
print("Number Plate: ", plate_text)
import os
import pytesseract
from PIL import Image
from ultralytics import YOLO
import cv2
from google.colab.patches import cv2_imshow
image_path = "/content/White-2BPlate.jpeg"
image_out_path = "test_out.jpg"
plate_out_path = "plate_number.jpg"
if not os.path.exists(image_path):
raise FileNotFoundError("Image file not found at path: {}".format(image_path))
img = cv2.imread(image_path)
# Load a model
model_path = "/content/drive/MyDrive/vrp/detect/train/weights/best.pt"
model = YOLO(model_path)
threshold = 0.5
class_name_dict = {0: "Vehicle registration plate"}
results = model(img)[0]
for result in results.boxes.data.tolist():
x1, y1, x2, y2, score, class_id = result
if (
score > threshold
and class_name_dict[int(class_id)] == "Vehicle registration plate"
):
cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
cv2.imwrite(plate_out_path, img[int(y1) : int(y2), int(x1) : int(x2)])
# OCR to read number plate
plate_img = Image.open(plate_out_path)
plate_text = pytesseract.image_to_string(plate_img, config="--psm 7")
cv2.putText(
img,
plate_text,
(int(x1), int(y1 - 20)),
cv2.FONT_HERSHEY_SIMPLEX,
1,
(0, 0, 255),
2,
cv2.LINE_AA,
)
cv2.imwrite(image_out_path, img)
cv2_imshow(img)
print(plate_text)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/035/129035406.ipynb
| null | null |
[{"Id": 129035406, "ScriptId": 38357873, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9404738, "CreationDate": "05/10/2023 13:21:47", "VersionNumber": 1.0, "Title": "Open Images Dataset V6 + YOLO v8", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 441.0, "LinesInsertedFromPrevious": 441.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# Refer: https://www.youtube.com/watch?v=_4A9inxGqRM
# Renaming classes.txt in OID toolkit with the required class
# set the file path
file_path = "/content/OIDv4_ToolKit/classes.txt"
# open the file in write mode and overwrite its contents
with open(file_path, mode="w") as f:
f.write("Car")
# print a message to confirm the operation
print(f"File {file_path} has been updated.")
import os
# set the new directory path
new_dir = "/content"
# change the current working directory to the new directory
os.chdir(new_dir)
# print the new working directory
print("New working directory:", os.getcwd())
import os
import cv2
import numpy as np
from tqdm import tqdm
import argparse
import fileinput
# function that turns XMin, YMin, XMax, YMax coordinates to normalized yolo format
def convert(filename_str, coords):
os.chdir("..")
image = cv2.imread(filename_str + ".jpg")
coords[2] -= coords[0]
coords[3] -= coords[1]
x_diff = int(coords[2] / 2)
y_diff = int(coords[3] / 2)
coords[0] = coords[0] + x_diff
coords[1] = coords[1] + y_diff
coords[0] /= int(image.shape[1])
coords[1] /= int(image.shape[0])
coords[2] /= int(image.shape[1])
coords[3] /= int(image.shape[0])
os.chdir("Label")
return coords
ROOT_DIR = os.getcwd()
# create dict to map class names to numbers for yolo
classes = {}
with open("/content/OIDv4_ToolKit/classes.txt", "r") as myFile:
for num, line in enumerate(myFile, 0):
line = line.rstrip("\n")
classes[line] = num
myFile.close()
# step into dataset directory
os.chdir(os.path.join("OID", "Dataset"))
DIRS = os.listdir(os.getcwd())
# for all train, validation and test folders
for DIR in DIRS:
if os.path.isdir(DIR):
os.chdir(DIR)
print("Currently in subdirectory:", DIR)
CLASS_DIRS = os.listdir(os.getcwd())
# for all class folders step into directory to change annotations
for CLASS_DIR in CLASS_DIRS:
if os.path.isdir(CLASS_DIR):
os.chdir(CLASS_DIR)
print("Converting annotations for class: ", CLASS_DIR)
# Step into Label folder where annotations are generated
os.chdir("Label")
for filename in tqdm(os.listdir(os.getcwd())):
filename_str = str.split(filename, ".")[0]
if filename.endswith(".txt"):
annotations = []
with open(filename) as f:
for line in f:
for class_type in classes:
line = line.replace(
class_type, str(classes.get(class_type))
)
labels = line.split()
coords = np.asarray(
[
float(labels[1]),
float(labels[2]),
float(labels[3]),
float(labels[4]),
]
)
coords = convert(filename_str, coords)
labels[1], labels[2], labels[3], labels[4] = (
coords[0],
coords[1],
coords[2],
coords[3],
)
newline = (
str(labels[0])
+ " "
+ str(labels[1])
+ " "
+ str(labels[2])
+ " "
+ str(labels[3])
+ " "
+ str(labels[4])
)
line = line.replace(line, newline)
annotations.append(line)
f.close()
os.chdir("..")
with open(filename, "w") as outfile:
for line in annotations:
outfile.write(line)
outfile.write("\n")
outfile.close()
os.chdir("Label")
os.chdir("..")
os.chdir("..")
os.chdir("..")
import yaml
# Define the dictionary to be saved as YAML
data = {
"path": "/content/OID/Dataset",
"train": "/content/OID/Dataset/train",
"val": "/content/OID/Dataset/test",
"names": {0: "Car"},
}
# Convert the dictionary to YAML format
yaml_data = yaml.dump(data)
# Save the YAML data to a file
with open("/content/google_colab_config.yaml", "w") as f:
f.write(yaml_data)
import os
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8s.yaml") # build a new model from scratch
# Use the model
results = model.train(
data=os.path.join("/content/google_colab_config.yaml"), epochs=20
) # train the model
# Weights are saved to the drive
from google.colab import drive
drive.mount("/content/gdrive")
import os
# set the new directory path
new_dir = "/content"
# change the current working directory to the new directory
os.chdir(new_dir)
# print the new working directory
print("New working directory:", os.getcwd())
import os
import glob
from IPython.display import Image, display
# Define the directory path
dir_path = "/content/gdrive/My Drive/vrp/runs/detect/train"
# Use glob to find all .jpg files in the directory
jpg_files = glob.glob(os.path.join(dir_path, "*.jpg"))
# Loop through the files and display them
for file in jpg_files:
display(Image(filename=file, width=500))
# # Custom video detection
import os
from ultralytics import YOLO
import cv2
VIDEOS_DIR = os.path.join(".", "videos")
video_path = os.path.join("/content/vrp.mp4")
video_path_out = "video_ouput.mp4".format(video_path)
if not os.path.exists(video_path):
raise FileNotFoundError("Video file not found at path: {}".format(video_path))
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
raise Exception("Error opening video file: {}".format(video_path))
ret, frame = cap.read()
if not ret:
raise Exception("Error reading video file: {}".format(video_path))
H, W, _ = frame.shape
out = cv2.VideoWriter(
video_path_out,
cv2.VideoWriter_fourcc(*"MP4V"),
int(cap.get(cv2.CAP_PROP_FPS)),
(W, H),
)
model_path = "/content/OID/Dataset/runs/detect/train/weights/best.pt"
# Load a model
model = YOLO(model_path) # load a custom model
threshold = 0.2
class_name_dict = {0: "Vehicle registration plate"}
while ret:
results = model(frame)[0]
for result in results.boxes.data.tolist():
x1, y1, x2, y2, score, class_id = result
if score > threshold:
cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
cv2.putText(
frame,
class_name_dict[int(class_id)].upper(),
(int(x1), int(y1 - 10)),
cv2.FONT_HERSHEY_SIMPLEX,
1.3,
(0, 255, 0),
3,
cv2.LINE_AA,
)
out.write(frame)
ret, frame = cap.read()
if not ret:
break
cap.release()
out.release()
cv2.destroyAllWindows()
import os
from ultralytics import YOLO
import cv2
VIDEOS_DIR = os.path.join(".", "videos")
video_path = os.path.join("/content/vrp.mp4")
video_path_out = "video_ouput.mp4".format(video_path)
if not os.path.exists(video_path):
raise FileNotFoundError("Video file not found at path: {}".format(video_path))
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
raise Exception("Error opening video file: {}".format(video_path))
ret, frame = cap.read()
if not ret:
raise Exception("Error reading video file: {}".format(video_path))
H, W, _ = frame.shape
out = cv2.VideoWriter(
video_path_out,
cv2.VideoWriter_fourcc(*"MP4V"),
int(cap.get(cv2.CAP_PROP_FPS)),
(W, H),
)
model_path = "/content/OID/Dataset/runs/detect/train/weights/best.pt"
# Load a model
model = YOLO(model_path) # load a custom model
threshold = 0.2
class_name_dict = {0: "Vehicle registration plate"}
while ret:
results = model(frame)[0]
    # YOLOv8 already applies non-maximum suppression inside predict, so the boxes in
    # `results` are de-duplicated; filter the remaining detections by confidence.
    for result in results.boxes.data.tolist():
        x1, y1, x2, y2, score, class_id = result
        if score <= threshold:
            continue
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
cv2.putText(
frame,
class_name_dict[int(class_id)].upper(),
(int(x1), int(y1 - 10)),
cv2.FONT_HERSHEY_SIMPLEX,
1.3,
(0, 255, 0),
3,
cv2.LINE_AA,
)
out.write(frame)
ret, frame = cap.read()
if not ret:
break
cap.release()
out.release()
cv2.destroyAllWindows()
# # Custom Image Detection
import os
from ultralytics import YOLO
import cv2
from google.colab.patches import cv2_imshow
image_path = "/content/OID/Dataset/train/Car/3655347acded111c.jpg"
image_out_path = "test_out.jpg"
if not os.path.exists(image_path):
raise FileNotFoundError("Image file not found at path: {}".format(image_path))
img = cv2.imread(image_path)
# Load a model
model_path = "/content/OID/Dataset/runs/detect/train/weights/best.pt"
model = YOLO(model_path)
threshold = 0.2
class_name_dict = {0: "Car"}
results = model(img)[0]
for result in results.boxes.data.tolist():
x1, y1, x2, y2, score, class_id = result
print(score)
if score > threshold:
cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
cv2.putText(
img,
class_name_dict[int(class_id)].upper(),
(int(x1), int(y1 - 10)),
cv2.FONT_HERSHEY_SIMPLEX,
1.3,
(0, 255, 0),
3,
cv2.LINE_AA,
)
cv2.imwrite(image_out_path, img)
cv2_imshow(img)
import locale
locale.getpreferredencoding = lambda: "UTF-8"
from google.colab import drive
drive.mount("/content/drive")
import os
import pytesseract
from PIL import Image
from ultralytics import YOLO
import cv2
from google.colab.patches import cv2_imshow
image_path = "/content/OID/Dataset/test/Vehicle registration plate/0cdae528456599ed.jpg"
image_out_path = "test_out.jpg"
plate_out_path = "plate_number.jpg"
if not os.path.exists(image_path):
raise FileNotFoundError("Image file not found at path: {}".format(image_path))
img = cv2.imread(image_path)
# Load a model
model_path = "/content/OID/Dataset/runs/detect/train/weights/best.pt"
model = YOLO(model_path)
threshold = 0.5
class_name_dict = {0: "Vehicle registration plate"}
results = model(img)[0]
for result in results.boxes.data.tolist():
x1, y1, x2, y2, score, class_id = result
if (
score > threshold
and class_name_dict[int(class_id)] == "Vehicle registration plate"
):
cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
cv2.imwrite(plate_out_path, img[int(y1) : int(y2), int(x1) : int(x2)])
cv2.imwrite(image_out_path, img)
cv2_imshow(img)
# OCR to read number plate
plate_img = Image.open(plate_out_path)
plate_text = pytesseract.image_to_string(plate_img, config="--psm 7")
print("Number Plate: ", plate_text)
import os
import pytesseract
from PIL import Image
from ultralytics import YOLO
import cv2
from google.colab.patches import cv2_imshow
image_path = "/content/White-2BPlate.jpeg"
image_out_path = "test_out.jpg"
plate_out_path = "plate_number.jpg"
if not os.path.exists(image_path):
raise FileNotFoundError("Image file not found at path: {}".format(image_path))
img = cv2.imread(image_path)
# Load a model
model_path = "/content/drive/MyDrive/vrp/detect/train/weights/best.pt"
model = YOLO(model_path)
threshold = 0.5
class_name_dict = {0: "Vehicle registration plate"}
results = model(img)[0]
for result in results.boxes.data.tolist():
x1, y1, x2, y2, score, class_id = result
if (
score > threshold
and class_name_dict[int(class_id)] == "Vehicle registration plate"
):
cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
cv2.imwrite(plate_out_path, img[int(y1) : int(y2), int(x1) : int(x2)])
# OCR to read number plate
plate_img = Image.open(plate_out_path)
plate_text = pytesseract.image_to_string(plate_img, config="--psm 7")
cv2.putText(
img,
plate_text,
(int(x1), int(y1 - 20)),
cv2.FONT_HERSHEY_SIMPLEX,
1,
(0, 0, 255),
2,
cv2.LINE_AA,
)
cv2.imwrite(image_out_path, img)
cv2_imshow(img)
print(plate_text)
| false | 0 | 3,827 | 0 | 3,827 | 3,827 |
||
129692510
|
<jupyter_start><jupyter_text>Student Study Hours
The data set contains two columns: the number of hours a student studied and the marks they got. We can apply simple linear regression to predict a student's marks from their number of study hours.
Kaggle dataset identifier: student-study-hours
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# ## Read the data set
df = pd.read_csv("/kaggle/input/student-study-hours/score.csv")
df.head()
# ### description and the info of the data set.
df.describe()
df.info()
# ## Check whether the relationship between study hours and scores is linear
from matplotlib import pyplot as plt
x = df[["Hours"]].values
y = df["Scores"].values
plt.scatter(x, y, color="red")
plt.xlabel("Study Hours")
plt.ylabel("Scores")
plt.show()
# ### we can see the relationship is roughly linear, so linear regression is a reasonable fit
# ## Train test split
from sklearn.model_selection import train_test_split
# ### split the value to test value and train value
x_train, x_test, y_train, y_test = train_test_split(
x, y, train_size=0.8, random_state=200
)
plt.scatter(x_train, y_train, label="traning data", color="red")
plt.scatter(x_test, y_test, label="test data", color="green")
plt.xlabel("Study Hours")
plt.ylabel("Scores")
plt.legend()
plt.show()
# ## Linear Model
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(x_train, y_train)
# ## Regression model's score (R², expressed as a percentage)
lr.score(x_test, y_test) * 100
# ## Predict the Test value
y_predict = lr.predict(x_test)
plt.plot(x_test, y_predict, label="linear regression", color="green")
plt.scatter(x_test, y_test, label="test data", color="yellow")
plt.scatter(x_train, y_train, label="Actual train data", color="red")
plt.xlabel("Study Hours")
plt.ylabel("Scores")
plt.legend()
plt.show()
# ## Coefficient Sign:
# In a simple linear regression model, the coefficient associated with the independent variable (Hours of Study) indicates the change in the dependent variable (Scores) for each additional hour of study.
lr.coef_[0]
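# A small illustration (7 study hours picked arbitrarily): the fitted line is
# Scores = intercept_ + coef_ * Hours, and evaluating it by hand matches lr.predict.
hours = 7
print(lr.intercept_ + lr.coef_[0] * hours)
print(lr.predict([[hours]])[0])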
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/692/129692510.ipynb
|
student-study-hours
|
himanshunakrani
|
[{"Id": 129692510, "ScriptId": 38546168, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14998089, "CreationDate": "05/15/2023 19:42:58", "VersionNumber": 1.0, "Title": "Linear Regression", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 67.0, "LinesInsertedFromPrevious": 67.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 186018647, "KernelVersionId": 129692510, "SourceDatasetVersionId": 3965497}]
|
[{"Id": 3965497, "DatasetId": 2353567, "DatasourceVersionId": 4021072, "CreatorUserId": 7943242, "LicenseName": "CC0: Public Domain", "CreationDate": "07/20/2022 13:17:29", "VersionNumber": 1.0, "Title": "Student Study Hours", "Slug": "student-study-hours", "Subtitle": "Number of hours student studied and Marks they got.", "Description": "The data set contains two columns. that is the number of hours student studied and the marks they got. we can apply simple linear regression to predict the marks of the student given their number of study hours.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2353567, "CreatorUserId": 7943242, "OwnerUserId": 7943242.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 6200064.0, "CurrentDatasourceVersionId": 6279333.0, "ForumId": 2380596, "Type": 2, "CreationDate": "07/20/2022 13:17:29", "LastActivityDate": "07/20/2022", "TotalViews": 58199, "TotalDownloads": 10440, "TotalVotes": 180, "TotalKernels": 61}]
|
[{"Id": 7943242, "UserName": "himanshunakrani", "DisplayName": "Himanshu Nakrani", "RegisterDate": "07/20/2021", "PerformanceTier": 2}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# ## Read the data set
df = pd.read_csv("/kaggle/input/student-study-hours/score.csv")
df.head()
# ### description and the info of the data set.
df.describe()
df.info()
# ## Check whether the relationship between study hours and scores is linear
from matplotlib import pyplot as plt
x = df[["Hours"]].values
y = df["Scores"].values
plt.scatter(x, y, color="red")
plt.xlabel("Study Hours")
plt.ylabel("Scores")
plt.show()
# ### we can see the relationship is roughly linear, so linear regression is a reasonable fit
# ## Train test split
from sklearn.model_selection import train_test_split
# ### split the value to test value and train value
x_train, x_test, y_train, y_test = train_test_split(
x, y, train_size=0.8, random_state=200
)
plt.scatter(x_train, y_train, label="traning data", color="red")
plt.scatter(x_test, y_test, label="test data", color="green")
plt.xlabel("Study Hours")
plt.ylabel("Scores")
plt.legend()
plt.show()
# ## Linear Model
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(x_train, y_train)
# ## Regression model's score (R², expressed as a percentage)
lr.score(x_test, y_test) * 100
# ## Predict the Test value
y_predict = lr.predict(x_test)
plt.plot(x_test, y_predict, label="linear regression", color="green")
plt.scatter(x_test, y_test, label="test data", color="yellow")
plt.scatter(x_train, y_train, label="Actual train data", color="red")
plt.xlabel("Study Hours")
plt.ylabel("Scores")
plt.legend()
plt.show()
# ## Coefficient Sign:
# In a simple linear regression model, the coefficient associated with the independent variable (Hours of Study) indicates the change in the dependent variable (Scores) for each additional hour of study.
lr.coef_[0]
| false | 1 | 548 | 1 | 615 | 548 |
||
129692149
|
# importing libs
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
# load data and split it
data = load_iris()
x_train, x_test, y_train, y_test = train_test_split(
data.data, data.target, test_size=0.3
)
# creating and learning ML model
model = SVC()
model.fit(x_train, y_train)
# testing model accuracy score
accuracy_score(y_test, model.predict(x_test))
# do prediction
data.target_names[model.predict([[1, 1, 1, 1]])][0]
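# The features are sepal length, sepal width, petal length and petal width (in cm),
# so a more realistic sample is something like the classic setosa measurements below.
data.target_names[model.predict([[5.1, 3.5, 1.4, 0.2]])][0]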
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/692/129692149.ipynb
| null | null |
[{"Id": 129692149, "ScriptId": 38556256, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15089407, "CreationDate": "05/15/2023 19:39:01", "VersionNumber": 1.0, "Title": "scikit learn iris classification", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 20.0, "LinesInsertedFromPrevious": 20.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# importing libs
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
# load data and split it
data = load_iris()
x_train, x_test, y_train, y_test = train_test_split(
data.data, data.target, test_size=0.3
)
# creating and learning ML model
model = SVC()
model.fit(x_train, y_train)
# testing model accuracy score
accuracy_score(y_test, model.predict(x_test))
# do prediction
data.target_names[model.predict([[1, 1, 1, 1]])][0]
| false | 0 | 177 | 0 | 177 | 177 |
||
129692404
|
<jupyter_start><jupyter_text>Customer Personality Analysis
### Context
**Problem Statement**
Customer Personality Analysis is a detailed analysis of a company’s ideal customers. It helps a business to better understand its customers and makes it easier for them to modify products according to the specific needs, behaviors and concerns of different types of customers.
Customer personality analysis helps a business to modify its product based on its target customers from different types of customer segments. For example, instead of spending money to market a new product to every customer in the company’s database, a company can analyze which customer segment is most likely to buy the product and then market the product only on that particular segment.
### Content
**Attributes**
**People**
* ID: Customer's unique identifier
* Year_Birth: Customer's birth year
* Education: Customer's education level
* Marital_Status: Customer's marital status
* Income: Customer's yearly household income
* Kidhome: Number of children in customer's household
* Teenhome: Number of teenagers in customer's household
* Dt_Customer: Date of customer's enrollment with the company
* Recency: Number of days since customer's last purchase
* Complain: 1 if the customer complained in the last 2 years, 0 otherwise
**Products**
* MntWines: Amount spent on wine in last 2 years
* MntFruits: Amount spent on fruits in last 2 years
* MntMeatProducts: Amount spent on meat in last 2 years
* MntFishProducts: Amount spent on fish in last 2 years
* MntSweetProducts: Amount spent on sweets in last 2 years
* MntGoldProds: Amount spent on gold in last 2 years
**Promotion**
* NumDealsPurchases: Number of purchases made with a discount
* AcceptedCmp1: 1 if customer accepted the offer in the 1st campaign, 0 otherwise
* AcceptedCmp2: 1 if customer accepted the offer in the 2nd campaign, 0 otherwise
* AcceptedCmp3: 1 if customer accepted the offer in the 3rd campaign, 0 otherwise
* AcceptedCmp4: 1 if customer accepted the offer in the 4th campaign, 0 otherwise
* AcceptedCmp5: 1 if customer accepted the offer in the 5th campaign, 0 otherwise
* Response: 1 if customer accepted the offer in the last campaign, 0 otherwise
**Place**
* NumWebPurchases: Number of purchases made through the company’s website
* NumCatalogPurchases: Number of purchases made using a catalogue
* NumStorePurchases: Number of purchases made directly in stores
* NumWebVisitsMonth: Number of visits to company’s website in the last month
### Target
Need to perform clustering to summarize customer segments.
### Acknowledgement
The dataset for this project is provided by Dr. Omar Romero-Hernandez.
### Solution
You can take help from following link to know more about the approach to solve this problem.
[Visit this URL ](https://thecleverprogrammer.com/2021/02/08/customer-personality-analysis-with-python/)
### Inspiration
happy learning....
**Hope you like this dataset please don't forget to like this dataset**
Kaggle dataset identifier: customer-personality-analysis
<jupyter_code>import pandas as pd
df = pd.read_csv('customer-personality-analysis/marketing_campaign.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 2240 entries, 0 to 2239
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ID Year_Birth Education Marital_Status Income Kidhome Teenhome Dt_Customer Recency MntWines MntFruits MntMeatProducts MntFishProducts MntSweetProducts MntGoldProds NumDealsPurchases NumWebPurchases NumCatalogPurchases NumStorePurchases NumWebVisitsMonth AcceptedCmp3 AcceptedCmp4 AcceptedCmp5 AcceptedCmp1 AcceptedCmp2 Complain Z_CostContact Z_Revenue Response 2240 non-null object
dtypes: object(1)
memory usage: 17.6+ KB
<jupyter_text>Examples:
{
"ID\tYear_Birth\tEducation\tMarital_Status\tIncome\tKidhome\tTeenhome\tDt_Customer\tRecency\tMntWines\tMntFruits\tMntMeatProducts\tMntFishProducts\tMntSweetProducts\tMntGoldProds\tNumDealsPurchases\tNumWebPurchases\tNumCatalogPurchases\tNumStorePurchases\tNumWebVisitsMonth\tAcceptedCmp3\tAccepte...(truncated)",
}
{
"ID\tYear_Birth\tEducation\tMarital_Status\tIncome\tKidhome\tTeenhome\tDt_Customer\tRecency\tMntWines\tMntFruits\tMntMeatProducts\tMntFishProducts\tMntSweetProducts\tMntGoldProds\tNumDealsPurchases\tNumWebPurchases\tNumCatalogPurchases\tNumStorePurchases\tNumWebVisitsMonth\tAcceptedCmp3\tAccepte...(truncated)",
}
{
"ID\tYear_Birth\tEducation\tMarital_Status\tIncome\tKidhome\tTeenhome\tDt_Customer\tRecency\tMntWines\tMntFruits\tMntMeatProducts\tMntFishProducts\tMntSweetProducts\tMntGoldProds\tNumDealsPurchases\tNumWebPurchases\tNumCatalogPurchases\tNumStorePurchases\tNumWebVisitsMonth\tAcceptedCmp3\tAccepte...(truncated)",
}
{
"ID\tYear_Birth\tEducation\tMarital_Status\tIncome\tKidhome\tTeenhome\tDt_Customer\tRecency\tMntWines\tMntFruits\tMntMeatProducts\tMntFishProducts\tMntSweetProducts\tMntGoldProds\tNumDealsPurchases\tNumWebPurchases\tNumCatalogPurchases\tNumStorePurchases\tNumWebVisitsMonth\tAcceptedCmp3\tAccepte...(truncated)",
}
<jupyter_script># **Loading and inpecting**
#
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from yellowbrick.cluster import KElbowVisualizer
from sklearn.cluster import KMeans
import warnings
warnings.filterwarnings("ignore")
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# load the dataset
data = pd.read_csv(
"../input/customer-personality-analysis/marketing_campaign.csv", sep="\t"
)
data.head()
data.info()
data.isnull().sum()
data.describe().T
# finding correlation between columns
plt.figure(figsize=(20, 10))
cmap = sns.diverging_palette(240, 10, as_cmap=True)
mask = np.triu(np.ones_like(data.corr()))
corr = sns.heatmap(
data.corr(), fmt=".2f", vmin=-1, vmax=1, annot=True, cmap=cmap, mask=mask
)
corr.set_title("Correlation Heatmap", fontdict={"fontsize": 18}, pad=5)
cmap = sns.diverging_palette(230, 20, as_cmap=True)
data.isnull().sum() / data.shape[0] * 100
# about 1% of the Income values are null
# filling the null values with mean..
data["Income"] = data["Income"].fillna(data["Income"].mean())
data.isnull().sum()
data.duplicated().sum()
# **No Duplicated values**
# ____
data.info()
# convert the Dt_Customer column to datetime
data["Dt_Customer"] = pd.to_datetime(data["Dt_Customer"])
# to know the last day..
data["Dt_Customer"].max()
data["last_day"] = pd.to_datetime("2014-12-06")
data["No_Days"] = (data["last_day"] - data["Dt_Customer"]).dt.days
data["No_Days"].max()
# make columns of age ..
data["age"] = 2023 - data["Year_Birth"]
pd.set_option("display.max_columns", None)
data.head(10)
# checking the outliers in the age and Income columns.
plt.figure(figsize=(15, 8))
plt.subplot(1, 2, 1)
plt.xlabel = "income"
sns.boxplot(data=data, x="Income", color="brown")
plt.subplot(1, 2, 2)
plt.xlabel = "age"
sns.boxplot(data=data, x="age", color="steelblue")
# ****so there is outliers****
# delete the outliers..
# from age column
data = data[data["age"] < 80]
# from income column
data = data[data["Income"] < 150000]
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.xlabel = "income"
sns.boxplot(data=data, x="Income", color="brown")
plt.subplot(1, 2, 2)
plt.xlabel = "age"
sns.boxplot(data=data, x="age")
data["Marital_Status"].value_counts()
# handling Marital_Status column
data["relationship"] = data["Marital_Status"].replace(
{
"Married": "in_relationship",
"Together": "in_relationship",
"Single": "single",
"Divorced": "single",
"YOLO": "single",
"Absurd": "single",
"Widow": "single",
"Alone": "single",
}
)
data.columns
data["members_home"] = (
data["Kidhome"]
+ data["Teenhome"]
+ data["relationship"].replace({"single": 0, "in_relationship": 1})
)
data["AcceptedCmp"] = data["AcceptedCmp1"] + data["AcceptedCmp2"] + data["AcceptedCmp3"]
+data["AcceptedCmp4"] + data["AcceptedCmp5"] + data["Response"]
data["num_purchases"] = (
data["NumWebPurchases"]
+ data["NumCatalogPurchases"]
+ data["NumStorePurchases"]
+ data["NumDealsPurchases"]
)
data["expenses"] = data["MntWines"] + data["MntFruits"] + data["MntMeatProducts"]
+data["MntFishProducts"] + data["MntSweetProducts"] + data["MntGoldProds"]
# dropping unnecessary columns
data.drop(
labels=[
"Marital_Status",
"ID",
"last_day",
"Year_Birth",
"Dt_Customer",
"last_day",
"Kidhome",
"Teenhome",
"MntWines",
"MntFruits",
"MntMeatProducts",
"MntFishProducts",
"MntSweetProducts",
"MntGoldProds",
"NumDealsPurchases",
"NumWebPurchases",
"NumCatalogPurchases",
"NumStorePurchases",
"NumWebVisitsMonth",
"AcceptedCmp3",
"AcceptedCmp4",
"AcceptedCmp5",
"AcceptedCmp1",
"AcceptedCmp2",
"Z_CostContact",
"Z_Revenue",
"Recency",
"Complain",
],
axis=1,
inplace=True,
)
data.columns
# make some plots
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 1)
sns.histplot(data, x="age", color="darkred")
plt.subplot(1, 2, 2)
sns.histplot(data, x="Income", color="steelblue")
plt.figure(figsize=(7, 7))
sns.countplot(data, x="relationship")
# pie plot of education
plt.figure(figsize=(10, 10))
data.Education.value_counts().plot(kind="pie")
plt.show()
# number of household members per customer.
plt.figure(figsize=(10, 10))
data.members_home.value_counts().plot(kind="pie")
plt.show()
data.columns
# # preprocess the data
data.info()
# convert education and relationship to num values..
data["Education"] = preprocessing.LabelEncoder().fit_transform(data["Education"])
data["relationship"] = preprocessing.LabelEncoder().fit_transform(data["relationship"])
# education after converting
data.head()
scaler = StandardScaler()
scaled_features = scaler.fit_transform(data.values)
scaled_data = pd.DataFrame(scaled_features, index=data.index, columns=data.columns)
# reduce features of the data to 4 ..
pca = PCA(n_components=4)
data_pca = pca.fit_transform(scaled_data)
data_pca.shape
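# Worth checking how much information the 4 components keep: explained_variance_ratio_
# gives the share of variance captured per component (the values depend on the data).
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())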
# # clustering Time
plt.figure(figsize=(12, 8))
elbow_graph = KElbowVisualizer(KMeans(random_state=123), k=10)
elbow_graph.fit(data_pca)
elbow_graph.show()
# number of clusters is ***`5`***
kmeans = KMeans(n_clusters=5)
cluster = kmeans.fit_predict(data_pca)
data["cluster"] = cluster
cluster.min(), cluster.max()
cluster
# plotting the clusters...
plt.figure(figsize=(15, 12))
sns.scatterplot(x=data_pca[:, 0], y=data_pca[:, 1], hue=cluster, s=60, palette="Set1")
plt.title("clusters in data")
# **make some plots and identify the spending `capabilities` and `income` for each cluster and comment on them below**
pl = sns.scatterplot(
data=data, x=data["expenses"], y=data["Income"], hue=data["cluster"]
)
pl.set_title("spending and icome of ecah cluster")
plt.legend()
plt.show()
sns.countplot(x=data["cluster"])
sns.histplot(x=data["cluster"], y=data["Income"])
sns.histplot(x=data["cluster"], y=data["expenses"])
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/692/129692404.ipynb
|
customer-personality-analysis
|
imakash3011
|
[{"Id": 129692404, "ScriptId": 38520685, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10012785, "CreationDate": "05/15/2023 19:42:00", "VersionNumber": 1.0, "Title": "Clustering K-means", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 209.0, "LinesInsertedFromPrevious": 209.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 186018441, "KernelVersionId": 129692404, "SourceDatasetVersionId": 2549419}]
|
[{"Id": 2549419, "DatasetId": 1546318, "DatasourceVersionId": 2592425, "CreatorUserId": 5270083, "LicenseName": "CC0: Public Domain", "CreationDate": "08/22/2021 18:15:05", "VersionNumber": 1.0, "Title": "Customer Personality Analysis", "Slug": "customer-personality-analysis", "Subtitle": "Analysis of company's ideal customers", "Description": "### Context\n\n**Problem Statement**\n\nCustomer Personality Analysis is a detailed analysis of a company\u2019s ideal customers. It helps a business to better understand its customers and makes it easier for them to modify products according to the specific needs, behaviors and concerns of different types of customers. \n\nCustomer personality analysis helps a business to modify its product based on its target customers from different types of customer segments. For example, instead of spending money to market a new product to every customer in the company\u2019s database, a company can analyze which customer segment is most likely to buy the product and then market the product only on that particular segment.\n\n\n### Content\n\n**Attributes**\n\n**People**\n\n* ID: Customer's unique identifier\n* Year_Birth: Customer's birth year\n* Education: Customer's education level\n* Marital_Status: Customer's marital status\n* Income: Customer's yearly household income\n* Kidhome: Number of children in customer's household\n* Teenhome: Number of teenagers in customer's household\n* Dt_Customer: Date of customer's enrollment with the company\n* Recency: Number of days since customer's last purchase\n* Complain: 1 if the customer complained in the last 2 years, 0 otherwise\n\n**Products**\n\n* MntWines: Amount spent on wine in last 2 years\n* MntFruits: Amount spent on fruits in last 2 years\n* MntMeatProducts: Amount spent on meat in last 2 years\n* MntFishProducts: Amount spent on fish in last 2 years\n* MntSweetProducts: Amount spent on sweets in last 2 years\n* MntGoldProds: Amount spent on gold in last 2 years\n\n**Promotion**\n\n* NumDealsPurchases: Number of purchases made with a discount\n* AcceptedCmp1: 1 if customer accepted the offer in the 1st campaign, 0 otherwise\n* AcceptedCmp2: 1 if customer accepted the offer in the 2nd campaign, 0 otherwise\n* AcceptedCmp3: 1 if customer accepted the offer in the 3rd campaign, 0 otherwise\n* AcceptedCmp4: 1 if customer accepted the offer in the 4th campaign, 0 otherwise\n* AcceptedCmp5: 1 if customer accepted the offer in the 5th campaign, 0 otherwise\n* Response: 1 if customer accepted the offer in the last campaign, 0 otherwise\n\n**Place**\n\n* NumWebPurchases: Number of purchases made through the company\u2019s website\n* NumCatalogPurchases: Number of purchases made using a catalogue\n* NumStorePurchases: Number of purchases made directly in stores\n* NumWebVisitsMonth: Number of visits to company\u2019s website in the last month\n\n### Target\n\nNeed to perform clustering to summarize customer segments.\n\n### Acknowledgement\n\nThe dataset for this project is provided by Dr. Omar Romero-Hernandez. \n\n### Solution \n\nYou can take help from following link to know more about the approach to solve this problem.\n[Visit this URL ](https://thecleverprogrammer.com/2021/02/08/customer-personality-analysis-with-python/)\n\n\n### Inspiration\n\nhappy learning....\n\n**Hope you like this dataset please don't forget to like this dataset**", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1546318, "CreatorUserId": 5270083, "OwnerUserId": 5270083.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2549419.0, "CurrentDatasourceVersionId": 2592425.0, "ForumId": 1566216, "Type": 2, "CreationDate": "08/22/2021 18:15:05", "LastActivityDate": "08/22/2021", "TotalViews": 653868, "TotalDownloads": 103484, "TotalVotes": 2125, "TotalKernels": 358}]
|
[{"Id": 5270083, "UserName": "imakash3011", "DisplayName": "Akash Patel", "RegisterDate": "06/09/2020", "PerformanceTier": 3}]
|
# **Loading and inspecting**
#
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from yellowbrick.cluster import KElbowVisualizer
from sklearn.cluster import KMeans
import warnings
warnings.filterwarnings("ignore")
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# load the dataset
data = pd.read_csv(
"../input/customer-personality-analysis/marketing_campaign.csv", sep="\t"
)
data.head()
data.info()
data.isnull().sum()
data.describe().T
# finding correlation between columns
plt.figure(figsize=(20, 10))
cmap = sns.diverging_palette(240, 10, as_cmap=True)
mask = np.triu(np.ones_like(data.corr()))
corr = sns.heatmap(
data.corr(), fmt=".2f", vmin=-1, vmax=1, annot=True, cmap=cmap, mask=mask
)
corr.set_title("Correlation Heatmap", fontdict={"fontsize": 18}, pad=5)
cmap = sns.diverging_palette(230, 20, as_cmap=True)
data.isnull().sum() / data.shape[0] * 100
# about 1% of the Income values are null
# filling the null values with mean..
data["Income"] = data["Income"].fillna(data["Income"].mean())
data.isnull().sum()
data.duplicated().sum()
# **No Duplicated values**
# ____
data.info()
# convert the Dt_Customer column to datetime
data["Dt_Customer"] = pd.to_datetime(data["Dt_Customer"])
# to know the last day..
data["Dt_Customer"].max()
data["last_day"] = pd.to_datetime("2014-12-06")
data["No_Days"] = (data["last_day"] - data["Dt_Customer"]).dt.days
data["No_Days"].max()
# make columns of age ..
data["age"] = 2023 - data["Year_Birth"]
pd.set_option("display.max_columns", None)
data.head(10)
# checking the outliers in the age and Income columns.
plt.figure(figsize=(15, 8))
plt.subplot(1, 2, 1)
plt.xlabel = "income"
sns.boxplot(data=data, x="Income", color="brown")
plt.subplot(1, 2, 2)
plt.xlabel = "age"
sns.boxplot(data=data, x="age", color="steelblue")
# ****so there is outliers****
# delete the outliers..
# from age column
data = data[data["age"] < 80]
# from income column
data = data[data["Income"] < 150000]
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.xlabel = "income"
sns.boxplot(data=data, x="Income", color="brown")
plt.subplot(1, 2, 2)
plt.xlabel = "age"
sns.boxplot(data=data, x="age")
data["Marital_Status"].value_counts()
# handling Marital_Status column
data["relationship"] = data["Marital_Status"].replace(
{
"Married": "in_relationship",
"Together": "in_relationship",
"Single": "single",
"Divorced": "single",
"YOLO": "single",
"Absurd": "single",
"Widow": "single",
"Alone": "single",
}
)
data.columns
data["members_home"] = (
data["Kidhome"]
+ data["Teenhome"]
+ data["relationship"].replace({"single": 0, "in_relationship": 1})
)
data["AcceptedCmp"] = data["AcceptedCmp1"] + data["AcceptedCmp2"] + data["AcceptedCmp3"]
+data["AcceptedCmp4"] + data["AcceptedCmp5"] + data["Response"]
data["num_purchases"] = (
data["NumWebPurchases"]
+ data["NumCatalogPurchases"]
+ data["NumStorePurchases"]
+ data["NumDealsPurchases"]
)
data["expenses"] = data["MntWines"] + data["MntFruits"] + data["MntMeatProducts"]
+data["MntFishProducts"] + data["MntSweetProducts"] + data["MntGoldProds"]
# dropping unnecessary columns
data.drop(
labels=[
"Marital_Status",
"ID",
"last_day",
"Year_Birth",
"Dt_Customer",
"last_day",
"Kidhome",
"Teenhome",
"MntWines",
"MntFruits",
"MntMeatProducts",
"MntFishProducts",
"MntSweetProducts",
"MntGoldProds",
"NumDealsPurchases",
"NumWebPurchases",
"NumCatalogPurchases",
"NumStorePurchases",
"NumWebVisitsMonth",
"AcceptedCmp3",
"AcceptedCmp4",
"AcceptedCmp5",
"AcceptedCmp1",
"AcceptedCmp2",
"Z_CostContact",
"Z_Revenue",
"Recency",
"Complain",
],
axis=1,
inplace=True,
)
data.columns
# make some plots
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 1)
sns.histplot(data, x="age", color="darkred")
plt.subplot(1, 2, 2)
sns.histplot(data, x="Income", color="steelblue")
plt.figure(figsize=(7, 7))
sns.countplot(data, x="relationship")
# pie plot of education
plt.figure(figsize=(10, 10))
data.Education.value_counts().plot(kind="pie")
plt.show()
# number of household members per customer.
plt.figure(figsize=(10, 10))
data.members_home.value_counts().plot(kind="pie")
plt.show()
data.columns
# # preprocess the data
data.info()
# convert education and relationship to num values..
data["Education"] = preprocessing.LabelEncoder().fit_transform(data["Education"])
data["relationship"] = preprocessing.LabelEncoder().fit_transform(data["relationship"])
# education after converting
data.head()
scaler = StandardScaler()
scaled_features = scaler.fit_transform(data.values)
scaled_data = pd.DataFrame(scaled_features, index=data.index, columns=data.columns)
# reduce features of the data to 4 ..
pca = PCA(n_components=4)
data_pca = pca.fit_transform(scaled_data)
data_pca.shape
# # clustering Time
plt.figure(figsize=(12, 8))
elbow_graph = KElbowVisualizer(KMeans(random_state=123), k=10)
elbow_graph.fit(data_pca)
elbow_graph.show()
# number of clusters is ***`5`***
kmeans = KMeans(n_clusters=5)
cluster = kmeans.fit_predict(data_pca)
data["cluster"] = cluster
cluster.min(), cluster.max()
cluster
# plotting the clusters...
plt.figure(figsize=(15, 12))
sns.scatterplot(x=data_pca[:, 0], y=data_pca[:, 1], hue=cluster, s=60, palette="Set1")
plt.title("clusters in data")
# **make some plots and identify the spending `capabilities` and `income` for each cluster and comment on them below**
pl = sns.scatterplot(
data=data, x=data["expenses"], y=data["Income"], hue=data["cluster"]
)
pl.set_title("spending and icome of ecah cluster")
plt.legend()
plt.show()
sns.countplot(x=data["cluster"])
sns.histplot(x=data["cluster"], y=data["Income"])
sns.histplot(x=data["cluster"], y=data["expenses"])
|
[{"customer-personality-analysis/marketing_campaign.csv": {"column_names": "[\"ID\\tYear_Birth\\tEducation\\tMarital_Status\\tIncome\\tKidhome\\tTeenhome\\tDt_Customer\\tRecency\\tMntWines\\tMntFruits\\tMntMeatProducts\\tMntFishProducts\\tMntSweetProducts\\tMntGoldProds\\tNumDealsPurchases\\tNumWebPurchases\\tNumCatalogPurchases\\tNumStorePurchases\\tNumWebVisitsMonth\\tAcceptedCmp3\\tAcceptedCmp4\\tAcceptedCmp5\\tAcceptedCmp1\\tAcceptedCmp2\\tComplain\\tZ_CostContact\\tZ_Revenue\\tResponse\"]", "column_data_types": "{\"ID\\tYear_Birth\\tEducation\\tMarital_Status\\tIncome\\tKidhome\\tTeenhome\\tDt_Customer\\tRecency\\tMntWines\\tMntFruits\\tMntMeatProducts\\tMntFishProducts\\tMntSweetProducts\\tMntGoldProds\\tNumDealsPurchases\\tNumWebPurchases\\tNumCatalogPurchases\\tNumStorePurchases\\tNumWebVisitsMonth\\tAcceptedCmp3\\tAcceptedCmp4\\tAcceptedCmp5\\tAcceptedCmp1\\tAcceptedCmp2\\tComplain\\tZ_CostContact\\tZ_Revenue\\tResponse\": \"object\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2240 entries, 0 to 2239\nData columns (total 1 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 ID\tYear_Birth\tEducation\tMarital_Status\tIncome\tKidhome\tTeenhome\tDt_Customer\tRecency\tMntWines\tMntFruits\tMntMeatProducts\tMntFishProducts\tMntSweetProducts\tMntGoldProds\tNumDealsPurchases\tNumWebPurchases\tNumCatalogPurchases\tNumStorePurchases\tNumWebVisitsMonth\tAcceptedCmp3\tAcceptedCmp4\tAcceptedCmp5\tAcceptedCmp1\tAcceptedCmp2\tComplain\tZ_CostContact\tZ_Revenue\tResponse 2240 non-null object\ndtypes: object(1)\nmemory usage: 17.6+ KB\n", "summary": "{\"ID\\tYear_Birth\\tEducation\\tMarital_Status\\tIncome\\tKidhome\\tTeenhome\\tDt_Customer\\tRecency\\tMntWines\\tMntFruits\\tMntMeatProducts\\tMntFishProducts\\tMntSweetProducts\\tMntGoldProds\\tNumDealsPurchases\\tNumWebPurchases\\tNumCatalogPurchases\\tNumStorePurchases\\tNumWebVisitsMonth\\tAcceptedCmp3\\tAcceptedCmp4\\tAcceptedCmp5\\tAcceptedCmp1\\tAcceptedCmp2\\tComplain\\tZ_CostContact\\tZ_Revenue\\tResponse\": {\"count\": 2240, \"unique\": 2240, \"top\": \"5524\\t1957\\tGraduation\\tSingle\\t58138\\t0\\t0\\t04-09-2012\\t58\\t635\\t88\\t546\\t172\\t88\\t88\\t3\\t8\\t10\\t4\\t7\\t0\\t0\\t0\\t0\\t0\\t0\\t3\\t11\\t1\", \"freq\": 1}}", "examples": "{\"ID\\tYear_Birth\\tEducation\\tMarital_Status\\tIncome\\tKidhome\\tTeenhome\\tDt_Customer\\tRecency\\tMntWines\\tMntFruits\\tMntMeatProducts\\tMntFishProducts\\tMntSweetProducts\\tMntGoldProds\\tNumDealsPurchases\\tNumWebPurchases\\tNumCatalogPurchases\\tNumStorePurchases\\tNumWebVisitsMonth\\tAcceptedCmp3\\tAcceptedCmp4\\tAcceptedCmp5\\tAcceptedCmp1\\tAcceptedCmp2\\tComplain\\tZ_CostContact\\tZ_Revenue\\tResponse\":{\"0\":\"5524\\t1957\\tGraduation\\tSingle\\t58138\\t0\\t0\\t04-09-2012\\t58\\t635\\t88\\t546\\t172\\t88\\t88\\t3\\t8\\t10\\t4\\t7\\t0\\t0\\t0\\t0\\t0\\t0\\t3\\t11\\t1\",\"1\":\"2174\\t1954\\tGraduation\\tSingle\\t46344\\t1\\t1\\t08-03-2014\\t38\\t11\\t1\\t6\\t2\\t1\\t6\\t2\\t1\\t1\\t2\\t5\\t0\\t0\\t0\\t0\\t0\\t0\\t3\\t11\\t0\",\"2\":\"4141\\t1965\\tGraduation\\tTogether\\t71613\\t0\\t0\\t21-08-2013\\t26\\t426\\t49\\t127\\t111\\t21\\t42\\t1\\t8\\t2\\t10\\t4\\t0\\t0\\t0\\t0\\t0\\t0\\t3\\t11\\t0\",\"3\":\"6182\\t1984\\tGraduation\\tTogether\\t26646\\t1\\t0\\t10-02-2014\\t26\\t11\\t4\\t20\\t10\\t3\\t5\\t2\\t2\\t0\\t4\\t6\\t0\\t0\\t0\\t0\\t0\\t0\\t3\\t11\\t0\"}}"}}]
| true | 1 |
<start_data_description><data_path>customer-personality-analysis/marketing_campaign.csv:
<column_names>
['ID\tYear_Birth\tEducation\tMarital_Status\tIncome\tKidhome\tTeenhome\tDt_Customer\tRecency\tMntWines\tMntFruits\tMntMeatProducts\tMntFishProducts\tMntSweetProducts\tMntGoldProds\tNumDealsPurchases\tNumWebPurchases\tNumCatalogPurchases\tNumStorePurchases\tNumWebVisitsMonth\tAcceptedCmp3\tAcceptedCmp4\tAcceptedCmp5\tAcceptedCmp1\tAcceptedCmp2\tComplain\tZ_CostContact\tZ_Revenue\tResponse']
<column_types>
{'ID\tYear_Birth\tEducation\tMarital_Status\tIncome\tKidhome\tTeenhome\tDt_Customer\tRecency\tMntWines\tMntFruits\tMntMeatProducts\tMntFishProducts\tMntSweetProducts\tMntGoldProds\tNumDealsPurchases\tNumWebPurchases\tNumCatalogPurchases\tNumStorePurchases\tNumWebVisitsMonth\tAcceptedCmp3\tAcceptedCmp4\tAcceptedCmp5\tAcceptedCmp1\tAcceptedCmp2\tComplain\tZ_CostContact\tZ_Revenue\tResponse': 'object'}
<dataframe_Summary>
{'ID\tYear_Birth\tEducation\tMarital_Status\tIncome\tKidhome\tTeenhome\tDt_Customer\tRecency\tMntWines\tMntFruits\tMntMeatProducts\tMntFishProducts\tMntSweetProducts\tMntGoldProds\tNumDealsPurchases\tNumWebPurchases\tNumCatalogPurchases\tNumStorePurchases\tNumWebVisitsMonth\tAcceptedCmp3\tAcceptedCmp4\tAcceptedCmp5\tAcceptedCmp1\tAcceptedCmp2\tComplain\tZ_CostContact\tZ_Revenue\tResponse': {'count': 2240, 'unique': 2240, 'top': '5524\t1957\tGraduation\tSingle\t58138\t0\t0\t04-09-2012\t58\t635\t88\t546\t172\t88\t88\t3\t8\t10\t4\t7\t0\t0\t0\t0\t0\t0\t3\t11\t1', 'freq': 1}}
<dataframe_info>
RangeIndex: 2240 entries, 0 to 2239
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ID Year_Birth Education Marital_Status Income Kidhome Teenhome Dt_Customer Recency MntWines MntFruits MntMeatProducts MntFishProducts MntSweetProducts MntGoldProds NumDealsPurchases NumWebPurchases NumCatalogPurchases NumStorePurchases NumWebVisitsMonth AcceptedCmp3 AcceptedCmp4 AcceptedCmp5 AcceptedCmp1 AcceptedCmp2 Complain Z_CostContact Z_Revenue Response 2240 non-null object
dtypes: object(1)
memory usage: 17.6+ KB
<some_examples>
{'ID\tYear_Birth\tEducation\tMarital_Status\tIncome\tKidhome\tTeenhome\tDt_Customer\tRecency\tMntWines\tMntFruits\tMntMeatProducts\tMntFishProducts\tMntSweetProducts\tMntGoldProds\tNumDealsPurchases\tNumWebPurchases\tNumCatalogPurchases\tNumStorePurchases\tNumWebVisitsMonth\tAcceptedCmp3\tAcceptedCmp4\tAcceptedCmp5\tAcceptedCmp1\tAcceptedCmp2\tComplain\tZ_CostContact\tZ_Revenue\tResponse': {'0': '5524\t1957\tGraduation\tSingle\t58138\t0\t0\t04-09-2012\t58\t635\t88\t546\t172\t88\t88\t3\t8\t10\t4\t7\t0\t0\t0\t0\t0\t0\t3\t11\t1', '1': '2174\t1954\tGraduation\tSingle\t46344\t1\t1\t08-03-2014\t38\t11\t1\t6\t2\t1\t6\t2\t1\t1\t2\t5\t0\t0\t0\t0\t0\t0\t3\t11\t0', '2': '4141\t1965\tGraduation\tTogether\t71613\t0\t0\t21-08-2013\t26\t426\t49\t127\t111\t21\t42\t1\t8\t2\t10\t4\t0\t0\t0\t0\t0\t0\t3\t11\t0', '3': '6182\t1984\tGraduation\tTogether\t26646\t1\t0\t10-02-2014\t26\t11\t4\t20\t10\t3\t5\t2\t2\t0\t4\t6\t0\t0\t0\t0\t0\t0\t3\t11\t0'}}
<end_description>
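The summary above shows all 29 fields collapsed into a single tab-separated column, so the file should be read with an explicit separator. A minimal, hedged sketch (the path is the one quoted in the description; adjust it to your environment):

import pandas as pd

# read the tab-separated file with an explicit separator; path assumed from the description above
df = pd.read_csv("customer-personality-analysis/marketing_campaign.csv", sep="\t")
print(df.shape)  # should report 2240 rows and 29 columns, matching the summary above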
| 2,060 | 1 | 3,617 | 2,060 |
129692639
|
# Here's an end-to-end data analytics project that includes code, visualizations, and some insightful analysis. In this project, we'll be working with a dataset containing information about online retail transactions. We'll perform data cleaning and exploratory data analysis, and generate meaningful insights from the data.
# **Step 1: Import Required Libraries**
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
## The import keyword is used to import libraries or modules in Python.
## pandas, numpy, matplotlib.pyplot, and seaborn are the libraries being imported.
## pd, np, plt, and sns are aliases or shorthand names given to these libraries to make it easier to refer to them in the code.
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# **Step 2: Load the Data**
data = pd.read_csv("../input/retail-dataset/end2endretail.csv")
data.head(10)  # first 10 rows
# We load the dataset into a pandas DataFrame using the read_csv() function. Make sure to provide the correct file path and encoding if needed.
# **Step 3: Data Cleaning and Preprocessing**
# .dropna(): Removes rows with missing values from the DataFrame.
data = data.dropna()
data = data.drop_duplicates()
# Convert the invoice date to datetime format
data["InvoiceDate"] = pd.to_datetime(data["InvoiceDate"])
# Extract additional features from the invoice date
data["Year"] = data["InvoiceDate"].dt.year
data["Month"] = data["InvoiceDate"].dt.month
data["Day"] = data["InvoiceDate"].dt.day
data["Hour"] = data["InvoiceDate"].dt.hour
# Filter out negative quantity and price values
data = data[(data["Quantity"] > 0) & (data["UnitPrice"] > 0)]
# In this step, we perform data cleaning and preprocessing operations to ensure the data is in a suitable format for analysis. This includes removing missing values and duplicates, converting the invoice date to datetime format, extracting additional features from the date (e.g., year, month, day, hour), and filtering out negative quantity and price values.
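# A quick, hedged sanity check on the cleaning steps above, using only columns created
# earlier in this notebook:
print("Remaining missing values:", data.isnull().sum().sum())
print("Remaining duplicate rows:", data.duplicated().sum())
print(data[["Year", "Month", "Day", "Hour"]].dtypes)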
# **Step 4: Exploratory Data Analysis and Visualizations**
# Top 10 countries with the highest number of transactions
top_countries = data["Country"].value_counts().head(10)
plt.figure(figsize=(12, 6))
sns.barplot(x=top_countries.index, y=top_countries.values)
plt.title("Top 10 Countries with the Highest Number of Transactions")
plt.xlabel("Country")
plt.ylabel("Number of Transactions")
plt.xticks(rotation=45)
plt.show()
# This visualization presents a count plot that illustrates the number of repeat customers based on the number of invoices they have made.
# It helps you understand the distribution of repeat customers and the frequency of their purchases.
repeat_customers = data.groupby("CustomerID")["InvoiceNo"].nunique().reset_index()
repeat_customers = repeat_customers[repeat_customers["InvoiceNo"] > 1]
plt.figure(figsize=(8, 6))
sns.countplot(data=repeat_customers, x="InvoiceNo")
plt.title("Number of Repeat Customers")
plt.xlabel("Number of Invoices")
plt.ylabel("Count")
plt.show()
# In this step, we explore the data and gain insights through visualizations. The two examples above are a bar chart of the top countries by transaction count and a count plot of repeat customers by number of invoices.
# **Step 5: Generate More Insights**
# Total Quantity
total_quantity = data["Quantity"].sum()
print("Total Quantity:", total_quantity)
# The value above is the total quantity across all transactions
# **Average quantity per month**
average_quantity_per_month = data.groupby("Month")["Quantity"].mean()
plt.figure(figsize=(10, 6))
sns.barplot(x=average_quantity_per_month.index, y=average_quantity_per_month.values)
plt.title("Average Quantity per Month")
plt.xlabel("Month")
plt.ylabel("Average Quantity")
plt.show()
# Quantity Distribution by Country
quantity_by_country = (
data.groupby("Country")["Quantity"].sum().sort_values(ascending=False)
)
print("Quantity Distribution by Country:\n", quantity_by_country)
quantity_by_country = (
data.groupby("Country")["Quantity"].sum().sort_values(ascending=False)
)
plt.figure(figsize=(12, 6))
sns.barplot(x=quantity_by_country.index, y=quantity_by_country.values)
plt.title("Quantity Distribution by Country")
plt.xlabel("Country")
plt.ylabel("Quantity")
plt.xticks(rotation=45)
plt.show()
# This analysis shows the total quantity distribution by country, helping you identify the countries that contribute the most to the overall quantity.
# The bar plot of average quantity per month uses one bar per month, with bar height equal to that month's average quantity; it helps visualize monthly trends and spot months with higher or lower average quantities.
# **Machine learning analysis using the retail dataset. We will use a linear regression model to predict the quantity purchased from the unit price and customer ID.**
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
# Feature Selection
# Select relevant features for prediction
features = ["UnitPrice", "CustomerID"]
# Split the data into features (X) and target variable (y)
X = data[features]
y = data["Quantity"]
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Create an instance of the Linear Regression model
model = LinearRegression()
# Fit the model on the training data
model.fit(X_train, y_train)
# Use the trained model to make predictions on the test data
y_pred = model.predict(X_test)
# View the predicted values
print(y_pred)
# Calculate the mean squared error
mse = mean_squared_error(y_test, y_pred)
print("Mean Squared Error:", mse)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/692/129692639.ipynb
| null | null |
[{"Id": 129692639, "ScriptId": 38484716, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6455124, "CreationDate": "05/15/2023 19:44:45", "VersionNumber": 3.0, "Title": "retail dataset analysis", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 149.0, "LinesInsertedFromPrevious": 13.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 136.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 1,527 | 0 | 1,527 | 1,527 |
||
129692841
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import plotly.express as px
import numpy as np
import pandas as pd
import os
import seaborn as sns
# There are batches of datasets,
# plus train_meta.parquet, test_meta and the sensor geometry file.
# Let's explore these.
# ### exploring the sensor geometry
sensor_geo = pd.read_csv(
"/kaggle/input/icecube-neutrinos-in-deep-ice/sensor_geometry.csv"
)
sensor_geo.describe()
# There are
# * 5160 sensors
# * across x = 576 to x = - 570, y = -521 to + 509, z = -512 to +524
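# A minimal sketch confirming those counts and ranges directly from the data
# (uses only the x, y, z columns referenced below):
print(len(sensor_geo), "sensor rows")
print(sensor_geo[["x", "y", "z"]].agg(["min", "max"]))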
sensor_geo
print(sensor_geo.nunique())
# * The sensors seem to be spread across x and y and stacked along the z axis
sensor_geo.head()
fig = px.scatter_3d(
x=sensor_geo["x"],
y=sensor_geo["y"],
z=sensor_geo["z"],
color=sensor_geo["z"],
opacity=0.5,
) # mode = 'markers',
fig.update_traces(marker_size=2)
fig.show()
# Shows where the sensors are placed
# Loading the dataset and looking how they look
train_meta = pd.read_parquet(
"/kaggle/input/icecube-neutrinos-in-deep-ice/train_meta.parquet"
)
# each batch has 200,000 events
# There are 660 batches
# train_meta has information about the batch_id, event_id, first_pulse_index, last_pulse_index, azimuth and zenith angles
# batch_ids run from 1 to 660
# There are 200,000 events in each batch. Within a batch, an event's first_pulse_index equals the previous event's last_pulse_index + 1
train_meta.head()
batch_1 = pd.read_parquet(
"/kaggle/input/icecube-neutrinos-in-deep-ice/train/batch_1.parquet"
)
batch_1.head()
batch_1.tail()
# Let's compare the batch_1 dataset with the rows of the meta dataset that have batch_id == 1
# All the event ids from meta, at least for batch 1, are unique.
batch_2 = pd.read_parquet(
"/kaggle/input/icecube-neutrinos-in-deep-ice/train/batch_2.parquet"
)
batch_2.head()
batch_2.tail()
# let's look at the batch number 1 specific information from the train set
batch1FromTrain = train_meta[train_meta["batch_id"] == 1]
batch1FromTrain.head()
# Each training dataset has batches and events information and each batch df has sensor id information,
batch_1.loc[24]
# we see that event id 24 gives 61 rows, which matches last_pulse_index - first_pulse_index + 1 for that event.
# for every pulse we get the time when it happens, the charge deposited, and the sensor_id of the detector that was triggered.
# Let's add this information on the training data
batch1FromTrain["nTimes"] = (
batch1FromTrain["last_pulse_index"] - batch1FromTrain["first_pulse_index"] + 1
)
batch1FromTrain.head()
batch1FromTrain.nTimes.value_counts()
# pick nTimes == 47 as an example value and look at those events
batch1FromTrain[batch1FromTrain["nTimes"] == 47]
# let's pick the first of those rows, i.e. the event id 1676
# look at information from batch 1
# resetting the index
batch_1.reset_index()
# note: the result is not assigned, so batch_1 keeps its event_id index
# batch_1.loc[1676]
# there should be 47 rows, giving information about which sensors are triggered,
# their positions, and the angles the event makes
# merging the dataframes
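# A hedged sketch of that merge: attach the sensor coordinates to every pulse of the
# event picked above (event id 1676) by joining on sensor_id, assuming the geometry
# file exposes a sensor_id column as in the competition data.
event_pulses = batch_1.loc[[1676]].merge(sensor_geo, on="sensor_id", how="left")
event_pulses.head()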
sns.histplot(batch_1["sensor_id"])
# the histogram shows how often each sensor_id is triggered in this batch
# for a particular batch, let's look at the event ids and what they represent
# There are batch ids mentioned in the train_meta that are
# * 660 batch ids
# * these batch ids have event ids, plus first_pulse_index and last_pulse_index for each event
# * The number of rows for an event_id within a batch equals last_pulse_index - first_pulse_index + 1
# * Each event_id stores the sensor ids that were triggered, the time t of each pulse, and the charge deposited
# We have
# * train_meta -> batches, the angle with it enters for an event
# * batch_n -> contains all the sensor number that gets triggered with increasing time and, charge deposited
# * sensor_geometry -> sensors id and where they are located
train_meta["nTimes"] = (train_meta.last_pulse_index - train_meta.first_pulse_index) + 1
train_meta.head()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/692/129692841.ipynb
| null | null |
[{"Id": 129692841, "ScriptId": 34806104, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10548576, "CreationDate": "05/15/2023 19:46:57", "VersionNumber": 1.0, "Title": "ice-cube", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 145.0, "LinesInsertedFromPrevious": 145.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
| null | null | null | null |
| false | 0 | 1,394 | 2 | 1,394 | 1,394 |
||
129692039
|
<jupyter_start><jupyter_text>Wild blueberry Yield Prediction Dataset
### Context
Blueberries are perennial flowering plants with blue or purple berries. They are classified in the section Cyanococcus within the genus Vaccinium. Vaccinium also includes cranberries, bilberries, huckleberries, and Madeira blueberries. Commercial blueberries—both wild (lowbush) and cultivated (highbush)—are all native to North America. The highbush varieties were introduced into Europe during the 1930s.
Blueberries are usually prostrate shrubs that can vary in size from 10 centimeters (4 inches) to 4 meters (13 feet) in height. In the commercial production of blueberries, the species with small, pea-size berries growing on low-level bushes are known as "lowbush blueberries" (synonymous with "wild"), while the species with larger berries growing on taller, cultivated bushes are known as "highbush blueberries". Canada is the leading producer of lowbush blueberries, while the United States produces some 40% of the world's supply of highbush blueberries.
### Content
"The dataset used for predictive modeling was generated by the Wild Blueberry Pollination Simulation Model, which is an open-source, spatially-explicit computer simulation program that enables exploration of how various factors, including plant spatial arrangement, outcrossing and self-pollination, bee species compositions and weather conditions, in isolation and combination, affect pollination efficiency and yield of the wild blueberry agroecosystem. The simulation model has been validated by the field observation and experimental data collected in Maine USA and Canadian Maritimes during the last 30 years and now is a useful tool for hypothesis testing and theory development for wild blueberry pollination researches."
| Feature | Unit | Description |
| --- | --- | --- |
| Clonesize | m2 | The average blueberry clone size in the field |
| Honeybee | bees/m2/min | Honeybee density in the field |
| Bumbles | bees/m2/min | Bumblebee density in the field |
| Andrena | bees/m2/min | Andrena bee density in the field |
| Osmia | bees/m2/min | Osmia bee density in the field |
| MaxOfUpperTRange | ℃ | The highest record of the upper band daily air temperature during the bloom season |
| MinOfUpperTRange | ℃ | The lowest record of the upper band daily air temperature |
| AverageOfUpperTRange | ℃ | The average of the upper band daily air temperature |
| MaxOfLowerTRange | ℃ | The highest record of the lower band daily air temperature |
| MinOfLowerTRange | ℃ | The lowest record of the lower band daily air temperature |
| AverageOfLowerTRange | ℃ | The average of the lower band daily air temperature |
| RainingDays | Day | The total number of days during the bloom season, each of which has precipitation larger than zero |
| AverageRainingDays | Day | The average of raining days of the entire bloom season |
Kaggle dataset identifier: wild-blueberry-yield-prediction-dataset
<jupyter_script>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import xgboost as xgb
import lightgbm as lgb
import os
import seaborn as sns
from catboost import CatBoostRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
train = pd.read_csv("/kaggle/input/playground-series-s3e14/train.csv", index_col="id")
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv", index_col="id")
df = pd.read_csv(
"/kaggle/input/wild-blueberry-yield-prediction-dataset/WildBlueberryPollinationSimulationData.csv"
)
df.drop("Row#", axis=1, inplace=True)
train.isnull().sum()
train.head()
train.describe()
fig, axes = plt.subplots(
nrows=len(train.select_dtypes(include="number").columns), ncols=4, figsize=(18, 80)
)
axes = axes.flatten()
i = 0
for col in train.select_dtypes(include="number").columns:
sns.kdeplot(train[col], label="train", ax=axes[i], fill=False)
sns.histplot(train[col], label="train", ax=axes[i + 1], stat="density", bins=50)
sns.kdeplot(df[col], label="original", ax=axes[i], fill=False, color="green")
sns.histplot(
df[col],
label="original",
ax=axes[i + 1],
stat="density",
bins=50,
color="green",
)
if col != "yield":
sns.kdeplot(test[col], label="test", ax=axes[i], fill=False)
sns.histplot(test[col], label="test", ax=axes[i + 1], stat="density", bins=50)
if col != "yield":
tmp_data = pd.DataFrame(
{"train": train[col], "test": test[col], "original": df[col]}
)
sns.boxplot(data=tmp_data, ax=axes[i + 2])
else:
tmp_data = pd.DataFrame({"train": train[col], "original": df[col]})
custom_palette = ["blue", "green"]
sns.boxplot(data=tmp_data, ax=axes[i + 2], palette=custom_palette)
axes[i + 2].set_xlabel(col)
sns.scatterplot(x=col, y="yield", label="train", ax=axes[i + 3], data=train)
sns.scatterplot(
x=col, y="yield", label="original", ax=axes[i + 3], data=df, color="green"
)
axes[i].legend()
axes[i + 1].legend()
axes[i + 3].legend()
i += 4
plt.show()
def show_honeybee_outlier_count(df: pd.DataFrame):
honeybee_data = df["honeybee"]
total = honeybee_data.count()
below_one = honeybee_data[honeybee_data < 1].count()
above_17 = honeybee_data[honeybee_data > 17].count()
one_to_17 = honeybee_data[honeybee_data > 1].count() - above_17
print(f" Total:\t\t {total}")
print(f" x < 1:\t\t {below_one}")
print(f" 1 < x < 17: {one_to_17}")
print(f" 17 < x:\t {above_17}")
print("Train")
show_honeybee_outlier_count(train)
print("Test")
show_honeybee_outlier_count(test)
print("Original")
show_honeybee_outlier_count(df)
def show_feature_correlation(df: pd.DataFrame, title: str):
plt.figure(figsize=(20, 20))
corr_matrix = df.corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr_matrix, dtype=bool)
mask[np.triu_indices_from(mask)] = True
sns.heatmap(corr_matrix, cmap="coolwarm", annot=True, mask=mask)
plt.title(title)
plt.show()
show_feature_correlation(train, "Train")
show_feature_correlation(test, "Test")
show_feature_correlation(df, "Original")
# Optionally append the original dataset (outlier-handling experiments are left commented out below)
use_og_data = True
if use_og_data:
train = pd.concat([train, df])
# df_train = df_train[df_train['honeybee'] < 1]
train.reset_index(drop=True, inplace=True)
# df_test[df_test['honeybee'] > 1] = df_train['honeybee'].mean()
train["fruit_seed"] = train["fruitset"] * train["seeds"]
test["fruit_seed"] = test["fruitset"] * test["seeds"]
train["pollinators"] = (
train["honeybee"] + train["bumbles"] + train["andrena"] + train["osmia"]
)
test["pollinators"] = (
test["honeybee"] + test["bumbles"] + test["andrena"] + test["osmia"]
)
# Remove highly correlated temperature features and the individual bee densities now captured by 'pollinators'
features_to_remove = [
"RainingDays",
"MaxOfUpperTRange",
"MinOfUpperTRange",
"AverageOfUpperTRange",
"MaxOfLowerTRange",
"MinOfLowerTRange",
"honeybee",
"bumbles",
"andrena",
"osmia",
]
train.drop(features_to_remove, axis=1, inplace=True)
test.drop(features_to_remove, axis=1, inplace=True)
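# A hedged sanity check that no near-duplicate feature pairs remain after the drop
# (the 0.95 threshold is an arbitrary choice for this sketch):
corr = train.drop(columns=["yield"]).corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
pairs = upper.stack()
print(pairs[pairs > 0.95])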
# Scale features (StandardScaler is used here but was not imported above)
from sklearn.preprocessing import StandardScaler

standard_scaler = StandardScaler()
X = train.drop(columns=["yield"])
X_scaled = standard_scaler.fit_transform(X)
df_train = pd.concat(
[pd.DataFrame(X_scaled, columns=X.columns), train["yield"]], axis=1
)
X_scaled = standard_scaler.transform(test)
test = pd.DataFrame(X_scaled, columns=X.columns)
df_train
models: list[str] = [
"LightGBM",
"CatBoost",
"XGBoost",
"NeuralNetwork",
"GradientBoostingRegressor",
"HuberRegressor",
"AdaBoostRegressor",
"RandomForestRegressor",
"ARDRegression",
"PLSRegression",
"ExtraTreesRegressor",
]
df_train.columns
X = df_train[
[
"clonesize",
"AverageOfLowerTRange",
"AverageRainingDays",
"fruitset",
"fruitmass",
"seeds",
"fruit_seed",
"pollinators",
]
]
y = df_train["yield"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.9, random_state=0
)
import catboost
cat_model = catboost.CatBoostRegressor()
cat_model.fit(X_train, y_train, verbose=False)
# Predict on the test set and calculate MAE
cat_pred = cat_model.predict(X_test)
cat_mae = mean_absolute_error(y_test, cat_pred)
lgb_model = lgb.LGBMRegressor()
lgb_model.fit(X_train, y_train)
# Predict on the test set and calculate MAE
lgb_pred = lgb_model.predict(X_test)
lgb_mae = mean_absolute_error(y_test, lgb_pred)
xgb_model = xgb.XGBRegressor()
xgb_model.fit(X_train, y_train, verbose=False)
# Predict on the test set and calculate MAE
xgb_pred = xgb_model.predict(X_test)
xgb_mae = mean_absolute_error(y_test, xgb_pred)
# Fit Random Forest model
rf_model = RandomForestRegressor()
rf_model.fit(X_train, y_train)
# Predict on the test set and calculate MAE
rf_pred = rf_model.predict(X_test)
rf_mae = mean_absolute_error(y_test, rf_pred)
gbr_model = GradientBoostingRegressor()
gbr_model.fit(X_train, y_train)
gbr_pred = gbr_model.predict(X_test)
gbr_mae = mean_absolute_error(y_test, gbr_pred)
# Print the results
print("CatBoost MAE: {:.2f}".format(cat_mae))
print("LightGBM MAE: {:.2f}".format(lgb_mae))
print("Random Forest MAE: {:.2f}".format(rf_mae))
print("XGBoost MAE: {:.2f}".format(xgb_mae))
print("GradientBoostingRegressor MAE: {:.2f}".format(gbr_mae))
model = RandomForestRegressor(
n_estimators=1000,
min_samples_split=5,
min_samples_leaf=10,
max_features="log2",
max_depth=13,
criterion="absolute_error",
bootstrap=True,
)
model.fit(X_train, y_train)
preds = model.predict(X_test)
preds
mae = mean_absolute_error(y_test, preds)
print(mae)
test
model.fit(X, y)
predictions = model.predict(test)
predictions
final = pd.DataFrame()
# final.index = id
test = pd.read_csv("/kaggle/input/playground-series-s3e14/test.csv")
final.index = test["id"]
final["yield"] = predictions
final.to_csv("submission.csv")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/692/129692039.ipynb
|
wild-blueberry-yield-prediction-dataset
|
shashwatwork
|
[{"Id": 129692039, "ScriptId": 38537243, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10554992, "CreationDate": "05/15/2023 19:37:44", "VersionNumber": 6.0, "Title": "Blueberry prediction", "EvaluationDate": "05/15/2023", "IsChange": false, "TotalLines": 201.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 201.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186017909, "KernelVersionId": 129692039, "SourceDatasetVersionId": 2462316}]
|
[{"Id": 2462316, "DatasetId": 1490445, "DatasourceVersionId": 2504743, "CreatorUserId": 1444085, "LicenseName": "Attribution 4.0 International (CC BY 4.0)", "CreationDate": "07/25/2021 17:48:21", "VersionNumber": 2.0, "Title": "Wild blueberry Yield Prediction Dataset", "Slug": "wild-blueberry-yield-prediction-dataset", "Subtitle": "Predict the yield of Wild Blueberry", "Description": "### Context\n\nBlueberries are perennial flowering plants with blue or purple berries. They are classified in the section Cyanococcus within the genus Vaccinium. Vaccinium also includes cranberries, bilberries, huckleberries, and Madeira blueberries. Commercial blueberries\u2014both wild (lowbush) and cultivated (highbush)\u2014are all native to North America. The highbush varieties were introduced into Europe during the 1930s.\n\nBlueberries are usually prostrate shrubs that can vary in size from 10 centimeters (4 inches) to 4 meters (13 feet) in height. In the commercial production of blueberries, the species with small, pea-size berries growing on low-level bushes are known as \"lowbush blueberries\" (synonymous with \"wild\"), while the species with larger berries growing on taller, cultivated bushes are known as \"highbush blueberries\". Canada is the leading producer of lowbush blueberries, while the United States produces some 40% of the world s supply of highbush blueberries.\n\n### Content\n\n\"The dataset used for predictive modeling was generated by the Wild Blueberry Pollination Simulation Model, which is an open-source, spatially-explicit computer simulation program that enables exploration of how various factors, including plant spatial arrangement, outcrossing and self-pollination, bee species compositions and weather conditions, in isolation and combination, affect pollination efficiency and yield of the wild blueberry agroecosystem. 
The simulation model has been validated by the field observation and experimental data collected in Maine USA and Canadian Maritimes during the last 30 years and now is a useful tool for hypothesis testing and theory development for wild blueberry pollination researches.\"\n\nFeatures \tUnit\tDescription\nClonesize\tm2\tThe average blueberry clone size in the field\nHoneybee\tbees/m2/min\tHoneybee density in the field\nBumbles\tbees/m2/min\tBumblebee density in the field\nAndrena\tbees/m2/min\tAndrena bee density in the field\nOsmia\tbees/m2/min\tOsmia bee density in the field\nMaxOfUpperTRange\t\u2103\tThe highest record of the upper band daily air temperature during the bloom season\nMinOfUpperTRange\t\u2103\tThe lowest record of the upper band daily air temperature\nAverageOfUpperTRange\t\u2103\tThe average of the upper band daily air temperature\nMaxOfLowerTRange\t\u2103\tThe highest record of the lower band daily air temperature\nMinOfLowerTRange\t\u2103\tThe lowest record of the lower band daily air temperature\nAverageOfLowerTRange\t\u2103\tThe average of the lower band daily air temperature\nRainingDays\tDay\tThe total number of days during the bloom season, each of which has precipitation larger than zero\nAverageRainingDays\tDay\tThe average of raining days of the entire bloom season\n\n### Acknowledgements\n\nQu, Hongchun; Obsie, Efrem; Drummond, Frank (2020), \u201cData for: Wild blueberry yield prediction using a combination of computer simulation and machine learning algorithms\u201d, Mendeley Data, V1, doi: 10.17632/p5hvjzsvn8.1\n\nDataset is outsourced from [here.](https://data.mendeley.com/datasets/p5hvjzsvn8/1)", "VersionNotes": "updated", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1490445, "CreatorUserId": 1444085, "OwnerUserId": 1444085.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2462316.0, "CurrentDatasourceVersionId": 2504743.0, "ForumId": 1510148, "Type": 2, "CreationDate": "07/25/2021 17:47:00", "LastActivityDate": "07/25/2021", "TotalViews": 11876, "TotalDownloads": 1130, "TotalVotes": 48, "TotalKernels": 82}]
|
[{"Id": 1444085, "UserName": "shashwatwork", "DisplayName": "Shashwat Tiwari", "RegisterDate": "11/24/2017", "PerformanceTier": 2}]
|
| false | 3 | 2,478 | 0 | 3,215 | 2,478 |
||
129692923
|
import pandas as pd
import matplotlib.pyplot as plt
import plotly.express as px
df1 = pd.read_csv("/kaggle/input/brasil-real-estate/brasil-real-estate-1.csv")
df1.head()
df1.info()
df1.shape
df1.dropna(inplace=True)
df1.shape
df1[["lat", "lon"]] = df1["lat-lon"].str.split(",", expand=True).astype(float)
df1
df1["place_with_parent_names"].head()
df1["place_with_parent_names"].str.split("|", expand=True).head()
df1["state"] = df1["place_with_parent_names"].str.split("|", expand=True)[2]
df1
df1["price_usd"].head()
df1["price_usd"] = (
df1["price_usd"]
.str.replace("$", "", regex=False)
.str.replace(",", "")
.astype(float)
)
df1["area_m2"] = df1["area_m2"].str.replace("$", "", regex=False).astype(float)
df1.drop(columns=["lat-lon", "place_with_parent_names"], inplace=True)
df1.head()
df2 = pd.read_csv("/kaggle/input/brasil-real-estate/brasil-real-estate-2.csv")
df2.head()
df2.info()
df2["price_usd"] = df2["price_brl"] / (3.19)
df2["price_usd"]
df2.dropna(inplace=True)
df2.drop(columns="price_brl", inplace=True)
df2.head()
df = pd.concat([df1, df2])
print("df shape:", df.shape)
df.head()
fig = px.scatter_mapbox(
df,
lat="lat",
lon="lon",
center={"lat": -14.2, "lon": -51.9}, # Map will be centered on Brazil
width=600,
height=600,
hover_data=["price_usd"], # Display price when hovering mouse over house
)
fig.update_layout(mapbox_style="open-street-map")
fig.show()
summary_stats = df[["area_m2", "price_usd"]].describe()
summary_stats
# Build histogram
plt.hist(df["price_usd"])
# Label axes
plt.xlabel("Price [USD]")
plt.ylabel("Frequency")
# Add title
plt.title("Distribution of Home Prices")
# Build box plot
plt.boxplot(df["area_m2"], vert=False)
# Label x-axis
plt.xlabel("Area [sq meters]")
# Add title
plt.title("Distribution of Home Sizes")
mean_price_by_region = df.groupby("region")["price_usd"].mean().sort_values()
mean_price_by_region.head()
# Build bar chart, label axes, add title
mean_price_by_region.plot(
kind="bar",
xlabel="Region",
ylabel="Mean Price [USD]",
title="Mean Home Price by Region",
)
df_south = df[df["region"] == "South"]
df_south.head()
homes_by_state = df_south["state"].value_counts()
homes_by_state
# Subset data
df_south_rgs = df_south[df_south["state"] == "Rio Grande do Sul"]
# Build scatter plot
plt.scatter(x=df_south_rgs["area_m2"], y=df_south_rgs["price_usd"])
# Label axes
plt.xlabel("Area [sq meters]")
plt.ylabel("Price [USD]")
# Add title
plt.title("Rio Grande do Sul: Price vs. Area")
df_rio = df_south[df_south["state"] == "Rio Grande do Sul"]
df_santa = df_south[df_south["state"] == "Santa Catarina"]
df_parana = df_south[df_south["state"] == "Paraná"]
rio_corr = df_rio["area_m2"].corr(df_rio["price_usd"])
santa_corr = df_santa["area_m2"].corr(df_santa["price_usd"])
parana_corr = df_parana["area_m2"].corr(df_parana["price_usd"])
south_states_corr = {
    "Rio Grande do Sul": rio_corr,
    "Santa Catarina": santa_corr,
    "Paraná": parana_corr,
}
south_states_corr
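# A hedged sketch visualising those correlations side by side, using only the
# dictionary built above:
corr_series = pd.Series(south_states_corr)
corr_series.plot(kind="bar")
plt.ylabel("Correlation: area_m2 vs. price_usd")
plt.title("South Region: Price-Area Correlation by State")
plt.show()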
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/692/129692923.ipynb
| null | null |
[{"Id": 129692923, "ScriptId": 38566575, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12372504, "CreationDate": "05/15/2023 19:47:49", "VersionNumber": 1.0, "Title": "Housing in Brazil", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 138.0, "LinesInsertedFromPrevious": 138.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 1,250 | 0 | 1,250 | 1,250 |
||
129692539
|
import pandas as pd
import numpy as np
import seaborn as sns
df = pd.read_excel("/kaggle/input/maas-tecrube/maas_tecrube.xlsx")
df
df.dtypes
df.shape
df.info()
# display the descriptive statistics
df.describe().T
sns.set(rc={"figure.figsize": (10, 7)})
sns.scatterplot(x="maas", y="tecrube", data=df)
df.corr()
sns.lineplot(x="maas", y="tecrube", data=df)
sns.regplot(x="maas", y="tecrube", data=df, ci=None)
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression()
x = df.drop("maas", axis=1)
y = df["maas"]
x
y
linear_regression.fit(x, y)
linear_regression.coef_
linear_regression.intercept_
katsayı = linear_regression.coef_  # coefficient (slope)
sabit = linear_regression.intercept_  # intercept
def maasTahmin(tecrube):
maas = tecrube * katsayı + sabit
return maas
maasTahmin(5)
linear_regression.predict([[5]])
df
df["tahminMaas"] = linear_regression.predict(x)
df
df["maas_tahminMaas_fark"] = df["maas"] - df["tahminMaas"]
df
df["farkınKaresi"] = df["maas_tahminMaas_fark"] ** 2
df
df["farkınKaresi"].sum() / 14
from sklearn.metrics import mean_squared_error
MSE = mean_squared_error(df["maas"], df["tahminMaas"])
MSE
# RMSE
import math
RMSE = math.sqrt(MSE)
RMSE
from sklearn.metrics import mean_absolute_error
MAE = mean_absolute_error(df["maas"], df["tahminMaas"])
MAE
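# A hedged follow-up metric: R^2 summarises how much of the variance in maas is
# explained by the single tecrube feature.
from sklearn.metrics import r2_score

R2 = r2_score(df["maas"], df["tahminMaas"])
R2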
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/692/129692539.ipynb
| null | null |
[{"Id": 129692539, "ScriptId": 38567188, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14529873, "CreationDate": "05/15/2023 19:43:23", "VersionNumber": 1.0, "Title": "Regresyon: Maas-Tecrube", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 81.0, "LinesInsertedFromPrevious": 81.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 508 | 0 | 508 | 508 |
||
129706448
|
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pickle
from scipy import stats
from scipy.stats import chi2_contingency
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import RobustScaler
import category_encoders as ce
from sklearn.feature_selection import RFECV
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import (
train_test_split,
cross_validate,
StratifiedKFold,
GridSearchCV,
cross_val_score,
RandomizedSearchCV,
)
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (
RandomForestClassifier,
AdaBoostClassifier,
ExtraTreesClassifier,
GradientBoostingClassifier,
BaggingClassifier,
)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (
roc_auc_score,
roc_curve,
RocCurveDisplay,
precision_recall_curve,
PrecisionRecallDisplay,
)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.metrics import (
confusion_matrix,
ConfusionMatrixDisplay,
classification_report,
)
from yellowbrick.classifier import DiscriminationThreshold
df = pd.read_csv("/kaggle/input/gamma-fest/train.csv")
df
df.hist(bins=10, figsize=(18, 15))
# ### Columns handled below with interpolation / KNN imputation: DC206, DC207, DC208, DC209, DC210, DC211, DC212, DC216, DC226, DC220, DC230a, DC230b, DC232, DC232b, DC242, DC244, DC246, DC252, DC142a
df["DC205"].fillna(96, inplace=True)
df["DC213"].fillna(96, inplace=True)
df["DC214"].fillna(96, inplace=True)
df["DC215"].fillna(96, inplace=True)
df["DC235"].fillna(3, inplace=True)
df["DC237"].fillna(8, inplace=True)
df["DC237b"].fillna(8, inplace=True)
df["DC237c"].fillna(8, inplace=True)
df["DC237d"].fillna(8, inplace=True)
df["DC237e"].fillna(8, inplace=True)
df["DC237f"].fillna(8, inplace=True)
df["DC241"].fillna(6, inplace=True)
df["DC109"].fillna(96, inplace=True)
from scipy.stats import f_oneway
alpha = 0.05
# Initialize empty lists for feature names, KS statistics, and p-values
feature = []
pvalue = []
f_value = []
# Encode the target once (0/1) so it can be used to group the features
target = df["DC201"].replace(["Layak Minum", "Tidak Layak Minum"], [1, 0])
# Iterate over each numeric feature in the dataframe
for col in df.select_dtypes(include=np.number).columns:
    # One-way ANOVA: do the feature means differ between the two target classes?
    f, p = f_oneway(df.loc[target == 1, col].dropna(), df.loc[target == 0, col].dropna())
    feature.append(col)
    f_value.append(f"{f:.4f}")
    pvalue.append(f"{p:.4f}")
# Create a new dataframe to store the F-statistics and p-values for each feature
df_anova = pd.DataFrame({"Feature": feature, "Fvalue": f_value, "Pvalue": pvalue})
df_anova.Pvalue = df_anova.Pvalue.astype("float")
# Print the results
# for idx, row in df_anova.iterrows():
#     if float(row['Pvalue']) > alpha:
#         print(f"Feature '{row['Feature']}' shows no significant relationship with the target (p-value = {row['Pvalue']})")
#     else:
#         print(f"Feature '{row['Feature']}' shows a significant relationship with the target (p-value = {row['Pvalue']})")
print(df_anova)
df_anova[df_anova["Pvalue"] > 0.05]
df[
[
"DC206",
"DC237a",
"DC207",
"DC208",
"DC209",
"DC210",
"DC211",
"DC212",
"DC216",
"DC226",
"DC220",
"DC230a",
"DC230b",
"DC232",
"DC232b",
"DC242",
"DC244",
"DC246",
"DC252",
"DC142a",
]
] = df[
[
"DC206",
"DC237a",
"DC207",
"DC208",
"DC209",
"DC210",
"DC211",
"DC212",
"DC216",
"DC226",
"DC220",
"DC230a",
"DC230b",
"DC232",
"DC232b",
"DC242",
"DC244",
"DC246",
"DC252",
"DC142a",
]
].interpolate(
method="linear", limit_direction="forward", axis=0
)
df = df.interpolate(method="linear", limit_direction="forward", axis=0)
from sklearn.impute import KNNImputer
df["DC201"].replace(["Layak Minum", "Tidak Layak Minum"], [1, 0], inplace=True)
imputer = KNNImputer(n_neighbors=5)
df[
[
"DC206",
"DC237a",
"DC207",
"DC208",
"DC209",
"DC210",
"DC211",
"DC212",
"DC216",
"DC226",
"DC220",
"DC230a",
"DC230b",
"DC232",
"DC232b",
"DC242",
"DC244",
"DC246",
"DC252",
"DC142a",
]
] = pd.DataFrame(
imputer.fit_transform(
df[
[
"DC206",
"DC237a",
"DC207",
"DC208",
"DC209",
"DC210",
"DC211",
"DC212",
"DC216",
"DC226",
"DC220",
"DC230a",
"DC230b",
"DC232",
"DC232b",
"DC242",
"DC244",
"DC246",
"DC252",
"DC142a",
]
]
),
columns=df[
[
"DC206",
"DC237a",
"DC207",
"DC208",
"DC209",
"DC210",
"DC211",
"DC212",
"DC216",
"DC226",
"DC220",
"DC230a",
"DC230b",
"DC232",
"DC232b",
"DC242",
"DC244",
"DC246",
"DC252",
"DC142a",
]
].columns,
)
df.isnull().sum()
df.isnull().sum()
df["DC201"].replace(["Layak Minum", "Tidak Layak Minum"], [1, 0], inplace=True)
# df['DC201'].replace([0.6],[0],inplace=True)
df.dropna(inplace=True)
df.DC201.value_counts()
df.dropna(inplace=True)
df.DC201.value_counts()
df_plus = df.corr()["DC201"].sort_values(ascending=False).reset_index()
df_plus = df_plus[df_plus["DC201"] > 0]
df_plus = np.array(df_plus["index"])
df_plus  # names of the features positively correlated with the target
X = df.drop(columns=["DC201", "id"])
y = df["DC201"]
print(X.shape)
print(y.shape)
from sklearn.preprocessing import RobustScaler
X = pd.DataFrame(RobustScaler().fit_transform(X))
# Symmetric log transform (a plain log10 would produce NaNs for the non-positive scaled values)
X = np.sign(X) * np.log10(1 + X.abs())
X
from sklearn.model_selection import train_test_split
# Separate train and test set for modelling
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42, stratify=y
)
# Train and test set dimension
print("Shape of X_train", X_train.shape)
print("Shape of y_train", y_train.shape)
print("Shape of X_test", X_test.shape)
print("Shape of y_test", y_test.shape)
X_train_over, y_train_over = SMOTE().fit_resample(X_train, y_train)
pd.Series(y_train_over).value_counts()
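# Illustrative check (not in the original notebook): class balance before vs. after SMOTE.
print("Before oversampling:")
print(y_train.value_counts(normalize=True))
print("After oversampling:")
print(pd.Series(y_train_over).value_counts(normalize=True))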
# Model assignment
dtc = DecisionTreeClassifier()
rfc = RandomForestClassifier()
abc = AdaBoostClassifier()
etc = ExtraTreesClassifier()
gbc = GradientBoostingClassifier()
bgc = BaggingClassifier()
knn = KNeighborsClassifier()
logreg = LogisticRegression()
nb = GaussianNB()
svc = SVC()
xgb = XGBClassifier(eval_metric="error")
mlp = MLPClassifier()
# Assign model to a list
models = [dtc, rfc, abc, etc, gbc, bgc, knn, logreg, nb, svc, xgb, mlp]
model_name = []
# Get Classifier names for every model
for name in models:
names = str(type(name)).split(".")[-1][:-2]
# Append classifier names to model_name list
model_name.append(names)
skfold = StratifiedKFold(n_splits=5)
# Cross validation for each model
dtc_score = cross_validate(
models[0],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
rfc_score = cross_validate(
models[1],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
abc_score = cross_validate(
models[2],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
etc_score = cross_validate(
models[3],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
gbc_score = cross_validate(
models[4],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
bgc_score = cross_validate(
models[5],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
knn_score = cross_validate(
models[6],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
logreg_score = cross_validate(
models[7],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
nb_score = cross_validate(
models[8],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
svm_score = cross_validate(
models[9],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
xgb_score = cross_validate(
models[10],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
mlp_score = cross_validate(
models[11],
X,
y,
scoring=("accuracy", "precision", "recall", "f1"),
cv=skfold,
n_jobs=-1,
verbose=1,
)
cv_result = [
dtc_score,
rfc_score,
abc_score,
etc_score,
gbc_score,
bgc_score,
knn_score,
logreg_score,
nb_score,
svm_score,
xgb_score,
mlp_score,
]
# Average score for each metrics
df_cv_result = pd.DataFrame(cv_result, index=model_name).applymap(np.mean)
df_cv_result = df_cv_result.sort_values(
["test_accuracy", "test_recall"], ascending=False
)
df_cv_result = df_cv_result.reset_index()
df_cv_result.rename(columns={"index": "Model"}, inplace=True)
df_cv_result
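# Optional visual comparison of the cross-validated scores (sketch; the column names
# come from the df_cv_result frame built above).
df_cv_result.plot(x="Model", y=["test_accuracy", "test_f1"], kind="barh", figsize=(8, 6))
plt.tight_layout()
plt.show()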
import tensorflow as tf
from tensorflow.keras import regularizers
# Define the model
model2 = tf.keras.Sequential()
model2.add(
tf.keras.layers.Dense(
units=512,
activation="relu",
input_shape=(38,),
kernel_regularizer=regularizers.l2(0.01),
)
)
model2.add(tf.keras.layers.Dropout(0.1))
model2.add(
tf.keras.layers.Dense(
units=512, activation="relu", kernel_regularizer=regularizers.l2(0.01)
)
)
model2.add(tf.keras.layers.Dropout(0.2))
model2.add(
tf.keras.layers.Dense(
units=256, activation="relu", kernel_regularizer=regularizers.l2(0.01)
)
)
model2.add(tf.keras.layers.Dropout(0.3))
model2.add(
tf.keras.layers.Dense(
units=256, activation="relu", kernel_regularizer=regularizers.l2(0.01)
)
)
model2.add(tf.keras.layers.Dropout(0.4))
model2.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))  # single sigmoid unit for the binary target
model2.compile(
    optimizer="adam",
    loss=tf.keras.losses.BinaryCrossentropy(),
metrics=["accuracy"],
)
# Create the early-stopping callback
callback = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=1)
# Train the model
history = model2.fit(
X,
y,
epochs=300,
batch_size=32,
validation_split=0.2,
callbacks=[callback],
validation_data=(X_val, y_val),
)
# Evaluate the model
test_loss, test_acc = model2.evaluate(X_test, y_test, verbose=2)
print(f"Test loss: {test_loss}, Test accuracy: {test_acc}")
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("Model MSE")
plt.ylabel("MSE")
plt.xlabel("Epoch")
plt.legend(["Train", "Validation"], loc="upper left")
plt.show()
# Plot the training and validation accuracy
plt.plot(history.history["accuracy"])
plt.plot(history.history["val_accuracy"])
plt.title("Model accuracy")
plt.ylabel("Accuracy")
plt.xlabel("Epoch")
plt.legend(["Train", "Validation"], loc="upper left")
plt.show()
# Assuming you have trained and obtained predictions from your model
y_pred = model2.predict(X_test)
y_pred_classes = (y_pred > 0.5).astype(int).ravel()  # Convert predicted probabilities to class labels
# Calculate F1 score
f1 = f1_score(y_test, y_pred_classes, average="macro")
print("F1 score:", f1)
# Define the second (fully connected) model architecture
import tensorflow as tf
# Define the complex model architecture
model = tf.keras.Sequential(
[
tf.keras.layers.Dense(256, activation="relu", input_shape=(38,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(128, activation="relu"),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(64, activation="relu"),
tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(2, activation="softmax"),  # two output classes for the binary target
]
)
model.compile(
optimizer="adam",
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=["accuracy"],
)
# Create the early-stopping callback
callback = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=1)
# Train the model
history = model.fit(
X,
y,
epochs=300,
batch_size=32,
validation_split=0.2,
callbacks=[callback],
validation_data=(X_val, y_val),
)
# Evaluate the model
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=2)
print(f"Test loss: {test_loss}, Test accuracy: {test_acc}")
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("Model MSE")
plt.ylabel("MSE")
plt.xlabel("Epoch")
plt.legend(["Train", "Validation"], loc="upper left")
plt.show()
# Plot the training and validation accuracy
plt.plot(history.history["accuracy"])
plt.plot(history.history["val_accuracy"])
plt.title("Model accuracy")
plt.ylabel("Accuracy")
plt.xlabel("Epoch")
plt.legend(["Train", "Validation"], loc="upper left")
plt.show()
# Assuming you have trained and obtained predictions from your model
y_pred = model.predict(X_test)
y_pred_classes = np.argmax(
y_pred, axis=1
) # Convert predicted probabilities to class labels
# Calculate F1 score
f1 = f1_score(y_test, y_pred_classes, average="macro")
print("F1 score:", f1)
# Cross validation for each model
dtc_score = cross_val_score(models[0], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
rfc_score = cross_val_score(models[1], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
abc_score = cross_val_score(models[2], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
etc_score = cross_val_score(models[3], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
gbc_score = cross_val_score(models[4], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
bgc_score = cross_val_score(models[5], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
knn_score = cross_val_score(models[6], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
logreg_score = cross_val_score(
models[7], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1
)
nb_score = cross_val_score(models[8], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
svm_score = cross_val_score(models[9], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
xgb_score = cross_val_score(models[10], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
mlp_score = cross_val_score(models[11], X, y, scoring="f1", cv=5, n_jobs=-1, verbose=1)
# List of score per model
cv_result = [
dtc_score,
rfc_score,
abc_score,
etc_score,
gbc_score,
bgc_score,
knn_score,
logreg_score,
nb_score,
svm_score,
xgb_score,
mlp_score,
]
# Create dataframe for score every k-fold
df_cv_result = pd.DataFrame(cv_result, index=model_name)
df_cv_result
# Calculate average for every k-fold validation
cv_mean = []
i = 0
for mean in cv_result:
mean = cv_result[i].mean()
cv_mean.append(mean)
i += 1
# Calculate standard deviation for every k-fold validation
cv_std = []
i = 0
for std in cv_result:
std = cv_result[i].std()
cv_std.append(std)
i += 1
# Average and standard deviation score for each model
df_cv = pd.DataFrame(
{"score_mean": cv_mean, "score_std": cv_std}, index=model_name
).sort_values(["score_mean", "score_std"], ascending=[False, True])
df_cv
# Create a list to assign a model score
train_score = []
test_score = []
f1_hasil = []
# Create dataframe
df_train_test = pd.DataFrame()
for i in models:
# Fit each model
model = i.fit(X_train, y_train)
# accuracy for training set
train_score.append(model.score(X_train, y_train))
# accuracy for testing set
test_score.append(model.score(X_test, y_test))
y_pred = model.predict(X_test)
f1_hasil.append(f1_score(y_test, y_pred))
# Create a dataframe to store accuracy score
df_avg_score = pd.DataFrame(
{"train score": train_score, "test score": test_score, "f1 score": f1_hasil},
index=model_name,
)
# Create a new column for the difference in accuracy score
df_avg_score["difference"] = abs(
df_avg_score["train score"] - df_avg_score["test score"]
)
# Sort accuracy by smallest difference
df_avg_score = df_avg_score.sort_values(
["train score", "test score", "difference"], ascending=[False, False, True]
)
df_avg_score
positive_records = y_train.sum()
negative_records = len(y_train) - positive_records
spw = negative_records / positive_records
from mlxtend.classifier import StackingClassifier
from sklearn.svm import SVC
xgb = RandomForestClassifier()
xgb2 = RandomForestClassifier(
bootstrap=True,
max_depth=100,
max_features=3,
min_samples_leaf=3,
min_samples_split=8,
n_estimators=200,
)
rfc1 = RandomForestClassifier()
rfc2 = RandomForestClassifier()
rfc3 = RandomForestClassifier()
clf2 = StackingClassifier(classifiers=[xgb, xgb2], meta_classifier=gbc, use_probas=True)
clf2.fit(X_train_over, y_train_over)
# predict test set
y_pred_def = clf2.predict(X_test)
# Calculate accuracy, precision, recall, and f1-score
train_score_def = clf2.score(X_train_over, y_train_over) * 100
test_score_def = clf2.score(X_test, y_test) * 100
prec_score_def = precision_score(y_test, y_pred_def) * 100
recall_score_def = recall_score(y_test, y_pred_def) * 100
f1_def = f1_score(y_test, y_pred_def) * 100
print("Training Accuracy : {}".format(train_score_def))
print("Test Accuracy : {}".format(test_score_def))
print("Precision Score : {}".format(prec_score_def))
print("Recall Score : {}".format(recall_score_def))
print("F1 Score : {}".format(f1_def))
from bayes_opt import BayesianOptimization
def stratified_kfold_score(clf, X, y, n_fold):
X, y = X.values, y.values
strat_kfold = StratifiedKFold(n_splits=n_fold, shuffle=True, random_state=1)
accuracy_list = []
for train_index, test_index in strat_kfold.split(X, y):
x_train_fold, x_test_fold = X[train_index], X[test_index]
y_train_fold, y_test_fold = y[train_index], y[test_index]
clf.fit(x_train_fold, y_train_fold)
preds = clf.predict(x_test_fold)
accuracy_test = accuracy_score(preds, y_test_fold)
accuracy_list.append(accuracy_test)
return np.array(accuracy_list).mean()
def bo_params_rf(max_samples, n_estimators, max_features):
params = {
"max_samples": max_samples,
"max_features": max_features,
"n_estimators": int(n_estimators),
}
clf = RandomForestClassifier(
max_samples=params["max_samples"],
max_features=params["max_features"],
n_estimators=params["n_estimators"],
)
score = stratified_kfold_score(clf, X_train, y_train, 5)
return score
rf_bo = BayesianOptimization(
bo_params_rf,
{"max_samples": (0.5, 1), "max_features": (0.5, 1), "n_estimators": (100, 200)},
)
results = rf_bo.maximize(n_iter=50, init_points=20, acq="ei")
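# The best score and parameter set found by the search can be read back from the
# optimizer (sketch using the `max` attribute exposed by the bayes_opt package).
best_run = rf_bo.max
print("Best CV accuracy:", best_run["target"])
print("Best parameters :", best_run["params"])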
model2.fit(X_train, y_train)
# predict test set
y_pred_def = (model2.predict(X_test) > 0.5).astype(int).ravel()
# Calculate accuracy, precision, recall, and f1-score
train_score_def = model2.evaluate(X_train, y_train, verbose=0)[1] * 100
test_score_def = model2.evaluate(X_test, y_test, verbose=0)[1] * 100
prec_score_def = precision_score(y_test, y_pred_def) * 100
recall_score_def = recall_score(y_test, y_pred_def) * 100
f1_def = f1_score(y_test, y_pred_def) * 100
print("Training Accuracy : {}".format(train_score_def))
print("Test Accuracy : {}".format(test_score_def))
print("Precision Score : {}".format(prec_score_def))
print("Recall Score : {}".format(recall_score_def))
print("F1 Score : {}".format(f1_def))
xgb.fit(X_train, y_train)
# predict test set
y_pred_def = xgb.predict(X_test)
# Calculate accuracy, precision, recall, and f1-score
train_score_def = xgb.score(X_train, y_train) * 100
test_score_def = xgb.score(X_test, y_test) * 100
prec_score_def = precision_score(y_test, y_pred_def) * 100
recall_score_def = recall_score(y_test, y_pred_def) * 100
f1_def = f1_score(y_test, y_pred_def) * 100
print("Training Accuracy : {}".format(train_score_def))
print("Test Accuracy : {}".format(test_score_def))
print("Precision Score : {}".format(prec_score_def))
print("Recall Score : {}".format(recall_score_def))
print("F1 Score : {}".format(f1_def))
df1 = pd.read_csv("/kaggle/input/gamma-fest/test.csv")
df1.head()
df1 = df1.drop(columns=["id"])
df1 = pd.DataFrame(RobustScaler().fit_transform(df1))
# Apply the same symmetric log transform that was applied to the training features
df1 = np.sign(df1) * np.log10(1 + df1.abs())
df1
final = clf2.predict(df1)
submission = pd.read_csv("/kaggle/input/gamma-fest/sample_submission.csv")
submission["DC201"] = final
submission
submission["DC201"].replace([1, 0], ["Layak Minum", "Tidak Layak Minum"], inplace=True)
submission
submission.DC201.value_counts()
submission.to_csv("submission3.csv", index=False)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/706/129706448.ipynb
| null | null |
[{"Id": 129706448, "ScriptId": 38456042, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8002178, "CreationDate": "05/15/2023 23:02:08", "VersionNumber": 1.0, "Title": "notebookf8219d6beb", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 493.0, "LinesInsertedFromPrevious": 493.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 7,491 | 0 | 7,491 | 7,491 |
||
129706647
|
<jupyter_start><jupyter_text>Preprocessed FOG Dataset
Kaggle dataset identifier: fog-dataset
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
data = pd.read_csv("/kaggle/input/fog-dataset/fog_dataset.csv")
data
# Select the features (X) and the targets (y)
X = data[["AccV", "AccML", "AccAP"]]
y = data[["StartHesitation", "Turn", "Walking"]]
from sklearn.model_selection import train_test_split
# Train/test split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Batch generator
def batch(x, y, batchsize=20000):
l = len(x)
for ndx in range(0, l, batchsize):
yield x[ndx : min(ndx + batchsize, l)], y[ndx : min(ndx + batchsize, l)]
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import average_precision_score
clf_StartHes = SGDClassifier(loss="log_loss", n_jobs=-1, shuffle=True)
batch_generator = batch(X_train, y_train["StartHesitation"])
for index, (batch_X, batch_y) in enumerate(batch_generator):
    clf_StartHes.partial_fit(batch_X, batch_y, classes=[0, 1])
y_predicted = clf_StartHes.predict_proba(X_test)[:, 1]
print(average_precision_score(y_test["StartHesitation"], y_predicted))
clf_Turn = SGDClassifier(loss="log_loss", n_jobs=-1, shuffle=True)
batch_generator = batch(X_train, y_train["Turn"])
for index, (batch_X, batch_y) in enumerate(batch_generator):
clf_Turn.partial_fit(batch_X, batch_y, classes=[0, 1])
y_predicted = clf_Turn.predict_proba(X_test)[:, 1]
print(average_precision_score(y_test["Turn"], y_predicted))
clf_Walk = SGDClassifier(loss="log_loss", n_jobs=-1, shuffle=True)
batch_generator = batch(X_train, y_train["Walking"])
for index, (batch_X, batch_y) in enumerate(batch_generator):
clf_Walk.partial_fit(batch_X, batch_y, classes=[0, 1])
y_predicted = clf_Walk.predict_proba(X_test)[:, 1]
print(average_precision_score(y_test["Walking"], y_predicted))
# --------------------------------------------------------------------------------------
clf_Turn = SGDClassifier(loss="log_loss", n_jobs=-1, shuffle=True)
batch_generator = batch(X, y["Turn"])
for index, (batch_X, batch_y) in enumerate(batch_generator):
clf_Turn.partial_fit(batch_X, batch_y, classes=[0, 1])
y_predicted = clf_Turn.predict_proba(X)[:, 1]
print(average_precision_score(y["Turn"], y_predicted))
# ----------------------------------------------------------------------------------------
clf_Turn = SGDClassifier(alpha=0.001, loss="log_loss", n_jobs=-1, shuffle=True)
ROUNDS = 10
for _ in range(ROUNDS):
batch_generator = batch(X_train, y_train["Turn"])
for index, (batch_X, batch_y) in enumerate(batch_generator):
clf_Turn.partial_fit(batch_X, batch_y, classes=[0, 1])
y_predicted = clf_Turn.predict_proba(X_test)[:, 1]
print(average_precision_score(y_test["Turn"], y_predicted))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/706/129706647.ipynb
|
fog-dataset
|
aerikg
|
[{"Id": 129706647, "ScriptId": 38519248, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6171471, "CreationDate": "05/15/2023 23:06:07", "VersionNumber": 3.0, "Title": "notebook1127797ef2", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 86.0, "LinesInsertedFromPrevious": 54.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 32.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186039101, "KernelVersionId": 129706647, "SourceDatasetVersionId": 5573463}]
|
[{"Id": 5573463, "DatasetId": 3168620, "DatasourceVersionId": 5648287, "CreatorUserId": 12406707, "LicenseName": "Unknown", "CreationDate": "05/01/2023 11:15:51", "VersionNumber": 4.0, "Title": "Preprocessed FOG Dataset", "Slug": "fog-dataset", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Data Update 2023-05-01", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3168620, "CreatorUserId": 12406707, "OwnerUserId": 12406707.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5573463.0, "CurrentDatasourceVersionId": 5648287.0, "ForumId": 3232837, "Type": 2, "CreationDate": "04/22/2023 19:25:46", "LastActivityDate": "04/22/2023", "TotalViews": 176, "TotalDownloads": 19, "TotalVotes": 0, "TotalKernels": 4}]
|
[{"Id": 12406707, "UserName": "aerikg", "DisplayName": "\u042d\u0440\u0438\u043a \u0410\u0431\u0434\u0443\u0440\u0430\u0445\u043c\u0430\u043d\u043e\u0432", "RegisterDate": "11/14/2022", "PerformanceTier": 0}]
|
| false | 1 | 1,088 | 0 | 1,110 | 1,088 |
||
129771153
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn import tree
from sklearn.model_selection import train_test_split # Import train_test_split function
from sklearn import metrics
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Loading Data & Feature Extraction
def read_train_data(instead_read_test_data=False):
import pandas as pd
# specifying datatypes and reducing some datatypes to avoid
# memory error
dtypes = {
"elapsed_time": np.int32,
"event_name": "category",
"name": "category",
"level": np.uint8,
"room_coor_x": np.float32,
"room_coor_y": np.float32,
"screen_coor_x": np.float32,
"screen_coor_y": np.float32,
"hover_duration": np.float32,
"text": "category",
"fqid": "category",
"room_fqid": "category",
"text_fqid": "category",
"fullscreen": np.int32,
"hq": np.int32,
"music": np.int32,
"level_group": "category",
}
if instead_read_test_data == True:
print("reading_test_Data file")
df = pd.read_csv(
"/kaggle/input/student-performance-and-game-play/test.csv", dtype=dtypes
)
else:
print("Loading train data")
df = pd.read_csv(
"/kaggle/input/student-performance-and-game-play/train.csv", dtype=dtypes
)
print("Train data Loaded")
return df
def read_train_data_labels():
import pandas as pd
print("Loading train labels")
df = pd.read_csv("/kaggle/input/student-performance-and-game-play/train_labels.csv")
print("Training Labels Loaded")
return df
def aggregating_merging_numeric_and_cat_train_data(
get_all_features=True, instead_read_test_data=False
):
"""
Train data contains data in different formats some are numeric while some
are objects and text fields. The session_id is unique to an individual and the
group_level, however many rows have same session_ids as a single user session can span over
some time creating lots of data. For performing analyses we must create features pertaining
each unique session_id.
"""
df = read_train_data(instead_read_test_data=instead_read_test_data)
# extracting columns pertaining to numeric data
numeric_columns = df.select_dtypes(include=["int64", "float64"]).columns
# grouping by session id and level group as questions are asked at the end of certain levels
# aggregating to create new features per session id
print("Extracting features from numerical data")
if get_all_features == False:
grouped = df.groupby(["session_id", "level_group"])[numeric_columns[2:]].agg(
# getting the count of entries in each group
count=("elapsed_time", "count"),
# Get max of the elapsed time column for each group
max_elapsed_time=("elapsed_time", "max"),
# Get min of the elapsed time column for each group
min_elapsed_time=("elapsed_time", "min"),
# Get min of the elapsed time column for each group
mean_elapsed_time=("elapsed_time", "mean"),
# Get min of the level column for each group
min_level=("level", "min"),
# Get min of the level column for each group
max_level=("level", "max"),
# Get min of the level column for each group
mean_level=("level", "mean"),
)
else:
grouped_num = df.groupby(["session_id", "level_group"])[
numeric_columns[2:]
].agg(["min", "max", "sum", "mean", "var"])
grouped_num.columns = ["_".join(x) for x in grouped_num.columns.ravel()]
print("Extracted features from numerical data")
# aggregating categorical features
print("Extracting features from Categorical data")
non_numeric_columns = df.select_dtypes(include=["category"]).columns
grouped_cat = df.groupby(["session_id", "level_group"])[non_numeric_columns].agg(
["unique", "nunique"]
)
grouped_cat.columns = ["_".join(x) for x in grouped_cat.columns.ravel()]
print("Extracted features from categorical data")
print("merging data frames")
merged = grouped_cat.join(grouped_num).sort_values(by=["session_id", "level_group"])
print("dataframes merged")
return merged.reset_index()
def _add_level_group(x):
"""
    Supporting function for the pandas ``apply`` calls that map a question
    number to its level group in the functions below.
"""
if x <= 3:
return "0-4"
elif x <= 13:
return "5-12"
elif x <= 22:
return "13-22"
def preparing_training_labels_dataset():
""""""
print("Perparing the training labels dataset")
# reading data file
df_train_labels = read_train_data_labels()
# extracting session id and question number
df_train_labels["question_number"] = df_train_labels["session_id"].apply(
lambda x: x.split("_")[1][1:]
)
df_train_labels["session_id"] = df_train_labels["session_id"].apply(
lambda x: x.split("_")[0]
)
    # Converting the question number column from object to integer
df_train_labels["question_number"] = df_train_labels["question_number"].apply(int)
# defining level groups
df_train_labels["level_group"] = df_train_labels["question_number"].apply(
_add_level_group
)
# assigning appropriate data types
df_train_labels["session_id"] = df_train_labels["session_id"].apply(int)
return df_train_labels
def creating_complete_dataset_with_labels_features():
import pandas as pd
import calendar
import time
current_GMT = time.gmtime()
time_stamp = calendar.timegm(current_GMT)
df_train_labels = preparing_training_labels_dataset()
merged = aggregating_merging_numeric_and_cat_train_data()
print("Merging the training labels dataset and the features dataset")
final_df = pd.merge(
df_train_labels,
merged,
how="left",
left_on=["session_id", "level_group"],
right_on=["session_id", "level_group"],
)
# print("Saving data with time stamp")
# final_df.to_csv("data/processed_data/" + str(time_stamp)+"_final_df_with_features.csv")
return final_df
def feature_engineer(test, sample_sub):
"""
taking in frames returned by the API and converting them to digestible formats
for submission files and model prediction
"""
# extracting columns pertaining to numeric data
numeric_columns = test.select_dtypes(include=["int64", "float64"]).columns
# grouping by session id and level group as questions are asked at the end of certain levels
# aggregating to create new features per session id
print("Extracting features from numerical data")
grouped_num = test.groupby(["session_id", "level_group"])[numeric_columns[2:]].agg(
["min", "max", "sum", "mean", "var"]
)
grouped_num.columns = ["_".join(x) for x in grouped_num.columns.ravel()]
print("Extracted features from numerical data")
# aggregating categorical features
print("Extracting features from Categorical data")
non_numeric_columns = test.select_dtypes(include=["category", "object"]).columns
grouped_cat = test.groupby(["session_id", "level_group"])[non_numeric_columns].agg(
["unique", "nunique"]
)
grouped_cat.columns = ["_".join(x) for x in grouped_cat.columns.ravel()]
print("Extracted features from categorical data")
print("merging data frames")
merged = grouped_cat.join(grouped_num).sort_values(by=["session_id", "level_group"])
print("dataframes merged")
    # modifying the sample submission file
    print("Preparing the sample submission dataset")
# extracting session id and question number
sample_sub["question_number"] = sample_sub["session_id"].apply(
lambda x: x.split("_")[1][1:]
)
sample_sub["session_id"] = sample_sub["session_id"].apply(lambda x: x.split("_")[0])
    # Converting the question number column from object to integer
sample_sub["question_number"] = sample_sub["question_number"].apply(int)
# defining level groups
sample_sub["level_group"] = sample_sub["question_number"].apply(_add_level_group)
# assigning appropriate data types
sample_sub["session_id"] = sample_sub["session_id"].apply(int)
final_df = pd.merge(
sample_sub,
merged,
how="left",
left_on=["session_id", "level_group"],
right_on=["session_id", "level_group"],
)
return final_df.reset_index()
df_train_with_labels = creating_complete_dataset_with_labels_features()
df_test_data = aggregating_merging_numeric_and_cat_train_data(
instead_read_test_data=True
)
df_train_with_labels.head()
# # Model Training
def decision_tree_model_sklearn(df):
"""
    The purpose of this model is to serve as a baseline and to indicate
    the importance of features. It is the first model and will help us
    understand the contribution of each feature to the results.
    Some of the important questions that need to be answered by performing
    EDA and baseline model creation:
    1. Do we create a separate model for each question?
    2. Which features are important, and which new features must be created or added?
    3. How important are the categorical features, and should more textual
       analysis be performed on them?
"""
df.fillna(0, inplace=True)
df = df.select_dtypes("number")
# removing uneccessary columns
cols = df.columns[2:]
X = df[cols]
# getting labels
y = df["correct"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, stratify=X["question_number"], random_state=1
)
model = tree.DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
    f1 = metrics.f1_score(y_test, y_pred)
    print("The f1 score of the model is ", f1)
return model
sklearn_tree_model = decision_tree_model_sklearn(df_train_with_labels)
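# Illustrative follow-up (not in the original notebook): inspect which aggregated
# features the baseline tree relies on, which speaks to question 2 in the docstring above.
feature_importances = pd.Series(
    sklearn_tree_model.feature_importances_,
    index=sklearn_tree_model.feature_names_in_,
).sort_values(ascending=False)
print(feature_importances.head(15))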
# # Inference
def create_df_session_id_question_number_pair(df_test):
"""
    The question numbers are in the train labels dataset. We do not have to
    predict the question numbers themselves; instead the model is run for
    every (session_id, question_number) pair, based on the level group.
    This function therefore creates the (session_id, question_number,
    level_group) pairs for the test sessions.
"""
import numpy as np
import pandas as pd
    # getting unique test session ids and duplicating them 18 times
# as there are 18 questions for each session id
test_session_id_list = df_test.session_id.unique().tolist() * 18
test_session_id_list.sort()
    # creating a list of question numbers, repeated once per unique session
    question_number_list = np.arange(1, 19).tolist() * df_test.session_id.nunique()
# creating data frame
ex = pd.DataFrame(
{"session_id": test_session_id_list, "question_number": question_number_list}
)
ex["level_group"] = ex["question_number"].apply(_add_level_group)
return ex
def final_inference_dataset(df_test_agg):
"""creating final inference dataset"""
import pandas as pd
df_ex = create_df_session_id_question_number_pair(df_test_agg)
df_inference = pd.merge(
df_ex,
df_test_agg,
how="left",
left_on=["session_id", "level_group"],
right_on=["session_id", "level_group"],
)
return df_inference
df_inf = final_inference_dataset(df_test_data)
df_inf.info()
df_inf["correct"] = sklearn_tree_model.predict(
df_inf[sklearn_tree_model.feature_names_in_]
)
df_submission = df_inf[["session_id", "question_number", "correct"]]
df_submission["session_id"] = df_submission.apply(
lambda x: str(x["session_id"]) + "_q" + str(x["question_number"]), axis=1
)
df_submission
# import jo_wilder
# env = jo_wilder.make_env()
# iter_test = env.iter_test()
# counter = 0
# # The API will deliver two dataframes in this specific order,
# # for every session+level grouping (one group per session for each checkpoint)
# for (sample_submission, test) in iter_test:
# if counter == 0:
# print(sample_submission.head())
# print(test.head())
# print(test.shape)
# ## users make predictions here using the test data
# sample_submission['correct'] = 0
# ## env.predict appends the session+level sample_submission to the overall
# ## submission
# env.predict(sample_submission)
# counter += 1
import jo_wilder
env = jo_wilder.make_env()
iter_test = env.iter_test()
for test, sample_submission in iter_test:
# FEATURE ENGINEER TEST DATA
df = feature_engineer(test, sample_submission)
p = sklearn_tree_model.predict(df[sklearn_tree_model.feature_names_in_])
df["correct"] = p
sample_submission = df[["session_id", "question_number", "correct"]]
sample_submission["session_id"] = sample_submission.apply(
lambda x: str(x["session_id"]) + "_q" + str(x["question_number"]), axis=1
)
sample_submission = sample_submission[["session_id", "correct"]]
env.predict(sample_submission)
df = pd.read_csv("submission.csv")
df.head()
sklearn_tree_model.feature_names_in_
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/771/129771153.ipynb
| null | null |
[{"Id": 129771153, "ScriptId": 38588956, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6466123, "CreationDate": "05/16/2023 11:01:00", "VersionNumber": 5.0, "Title": "First Submission", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 445.0, "LinesInsertedFromPrevious": 1.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 444.0, "LinesInsertedFromFork": 439.0, "LinesDeletedFromFork": 22.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 6.0, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 3,779 | 0 | 3,779 | 3,779 |
||
129336563
|
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold
from lightgbm import LGBMClassifier
from optuna.integration import lightgbm as lgb
from sklearn.metrics import log_loss
from sklearn.preprocessing import MinMaxScaler
from lightgbm import early_stopping
import matplotlib.pyplot as plt
import seaborn as sns
train = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/train.csv")
test = pd.read_csv("/kaggle/input/icr-identify-age-related-conditions/test.csv")
sub = pd.read_csv(
"/kaggle/input/icr-identify-age-related-conditions/sample_submission.csv"
)
cols = [e for e in train.columns if e not in ["Id", "Class"]]
# ## Exploratory Data Analysis
train.head()
test.head()
train.info()
# Check if there are any null values
train.isnull().sum()
train[cols].describe()
## Target distribution
pie, ax = plt.subplots(figsize=[12, 4])
train.groupby("Class").size().plot(
    kind="pie", autopct="%.1f", ax=ax, title="Target distribution"
)
# ## Feature Engineering
train["EJ"] = train["EJ"].map({"A": 0, "B": 1})
test["EJ"] = test["EJ"].map({"A": 0, "B": 1})
scaler = MinMaxScaler()
train[cols] = scaler.fit_transform(train[cols])
test[cols] = scaler.transform(test[cols])
# ## Let's build a lightgbm model
# I obtained these parameters using Optuna
params = {
"reg_alpha": 0.10874324351573438,
"reg_lambda": 0.1814038210181044,
"colsample_bytree": 0.8,
"subsample": 1.0,
"learning_rate": 0.15,
"max_depth": 100,
"num_leaves": 219,
"min_child_samples": 50,
"cat_smooth": 64,
"metric": "logloss",
"random_state": 48,
"n_estimators": 20000,
}
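# Added illustration (not part of the original notebook): the comment above only states
# that the parameters came from Optuna, so the sketch below shows the kind of study that
# could produce them; the search ranges and trial budget are made-up assumptions, not the
# ones actually used.
import optuna
from sklearn.model_selection import cross_val_score
def objective(trial):
    trial_params = {
        "reg_alpha": trial.suggest_float("reg_alpha", 1e-3, 1.0, log=True),
        "reg_lambda": trial.suggest_float("reg_lambda", 1e-3, 1.0, log=True),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3),
        "num_leaves": trial.suggest_int("num_leaves", 31, 255),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
        "n_estimators": 500,
        "random_state": 48,
    }
    # 3-fold cross-validated log loss for this trial's parameters
    scores = cross_val_score(
        LGBMClassifier(**trial_params),
        train[cols],
        train["Class"],
        cv=3,
        scoring="neg_log_loss",
    )
    return -scores.mean()
# Uncomment to run the (slow) search:
# study = optuna.create_study(direction="minimize")
# study.optimize(objective, n_trials=50)
# print(study.best_params)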
preds = 0
kf = StratifiedKFold(n_splits=5, random_state=42, shuffle=True)
metric = []  # list containing the log loss for each fold
n = 0
for trn_idx, test_idx in kf.split(train[cols], train["Class"]):
X_tr, X_val = train[cols].iloc[trn_idx], train[cols].iloc[test_idx]
y_tr, y_val = train["Class"].iloc[trn_idx], train["Class"].iloc[test_idx]
model = LGBMClassifier(**params)
model.fit(
X_tr,
y_tr,
eval_set=[(X_val, y_val)],
callbacks=[early_stopping(100, verbose=False)],
eval_metric="logloss",
)
preds += model.predict_proba(test[cols]) / kf.n_splits
metric.append(log_loss(y_val, model.predict_proba(X_val)))
print(f"fold: {n+1} , log loss: {round(metric[n],3)}")
n += 1
print(f"The average log loss : {np.mean(metric)}")
# top 10 most important features of the lgb model
lgb.plot_importance(model, max_num_features=10, figsize=(6, 6))
plt.show()
# ## Let's Make a Submission
sub[["class_0", "class_1"]] = preds
sub.to_csv("submission.csv", index=False)
sub
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/336/129336563.ipynb
| null | null |
[{"Id": 129336563, "ScriptId": 38454213, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 3489917, "CreationDate": "05/12/2023 22:58:58", "VersionNumber": 2.0, "Title": "ICR | LightGBM \ud83c\udfc4", "EvaluationDate": "05/12/2023", "IsChange": false, "TotalLines": 75.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 75.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 964 | 0 | 964 | 964 |
||
129336037
|
<jupyter_start><jupyter_text>Mushroom Classification
### Context
Although this dataset was originally contributed to the UCI Machine Learning repository nearly 30 years ago, mushroom hunting (otherwise known as "shrooming") is enjoying new peaks in popularity. Learn which features spell certain death and which are most palatable in this dataset of mushroom characteristics. And how certain can your model be?
### Content
This dataset includes descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota Family Mushroom drawn from The Audubon Society Field Guide to North American Mushrooms (1981). Each species is identified as definitely edible, definitely poisonous, or of unknown edibility and not recommended. This latter class was combined with the poisonous one. The Guide clearly states that there is no simple rule for determining the edibility of a mushroom; no rule like "leaflets three, let it be'' for Poisonous Oak and Ivy.
- **Time period**: Donated to UCI ML 27 April 1987
### Inspiration
- What types of machine learning models perform best on this dataset?
- Which features are most indicative of a poisonous mushroom?
Kaggle dataset identifier: mushroom-classification
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
df = pd.read_csv("../input/mushroom-classification/mushrooms.csv")
encoder = LabelEncoder()
df = df.apply(encoder.fit_transform)
df.head()
X = df.drop(columns=["class"])
y = df["class"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print("X_train = ", X_train.shape)
print("y_train = ", y_train.shape)
print("X_test = ", X_test.shape)
print("y_test = ", y_test.shape)
def prior(y_train, label):
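    # prior P(label): the fraction of training points that belong to this class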
total_points = y_train.shape[0]
class_points = np.sum(y_train == label)
return class_points / float(total_points)
def cond_prob(X_train, y_train, feat_col, feat_val, label):
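    # likelihood P(feature = feat_val | label), estimated from the rows of the given class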
X_filtered = X_train[y_train == label]
numerator = np.sum(X_filtered[feat_col] == feat_val)
denominator = np.sum(y_train == label)
return numerator / float(denominator)
def predict(X_train, y_train, xtest):
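    # for each class, compute prior * product of per-feature likelihoods and pick the largest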
classes = np.unique(y_train)
features = [x for x in X_train.columns]
post_probs = []
for label in classes:
likelihood = 1.0
for f in features:
cond = cond_prob(X_train, y_train, f, xtest[f], label)
likelihood *= cond
prior_prob = prior(y_train, label)
posterior = prior_prob * likelihood
post_probs.append(posterior)
prediction = np.argmax(post_probs)
return prediction
rand_example = 6
output = predict(X_train, y_train, X_test.iloc[rand_example])
print("Naive Bayes Classifier predicts ", output)
print("Current Answer ", y_test.iloc[rand_example])
def accuracy_score(X_train, y_train, xtest, ytest):
preds = []
for i in range(xtest.shape[0]):
pred_label = predict(X_train, y_train, xtest.iloc[i])
preds.append(pred_label)
preds = np.array(preds)
accuracy = np.sum(preds == ytest) / ytest.shape[0]
return accuracy
print(
"Accuracy Score for our classifier == ",
accuracy_score(X_train, y_train, X_test, y_test),
)
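# Added illustration (not part of the original notebook): as a sanity check, the
# hand-rolled classifier above can be compared against scikit-learn's CategoricalNB on
# the same split; `min_categories` is passed so that category values present only in the
# test split do not raise an error.
from sklearn.naive_bayes import CategoricalNB
sk_nb = CategoricalNB(min_categories=[df[c].nunique() for c in X.columns])
sk_nb.fit(X_train, y_train)
print("sklearn CategoricalNB accuracy == ", sk_nb.score(X_test, y_test))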
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/336/129336037.ipynb
|
mushroom-classification
| null |
[{"Id": 129336037, "ScriptId": 38454182, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15078612, "CreationDate": "05/12/2023 22:47:33", "VersionNumber": 1.0, "Title": "Bayesian method for mushroom classification", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 90.0, "LinesInsertedFromPrevious": 90.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185284749, "KernelVersionId": 129336037, "SourceDatasetVersionId": 974}]
|
[{"Id": 974, "DatasetId": 478, "DatasourceVersionId": 974, "CreatorUserId": 495305, "LicenseName": "CC0: Public Domain", "CreationDate": "12/01/2016 23:08:00", "VersionNumber": 1.0, "Title": "Mushroom Classification", "Slug": "mushroom-classification", "Subtitle": "Safe to eat or deadly poison?", "Description": "### Context\n\nAlthough this dataset was originally contributed to the UCI Machine Learning repository nearly 30 years ago, mushroom hunting (otherwise known as \"shrooming\") is enjoying new peaks in popularity. Learn which features spell certain death and which are most palatable in this dataset of mushroom characteristics. And how certain can your model be?\n\n### Content \n\nThis dataset includes descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota Family Mushroom drawn from The Audubon Society Field Guide to North American Mushrooms (1981). Each species is identified as definitely edible, definitely poisonous, or of unknown edibility and not recommended. This latter class was combined with the poisonous one. The Guide clearly states that there is no simple rule for determining the edibility of a mushroom; no rule like \"leaflets three, let it be'' for Poisonous Oak and Ivy.\n\n- **Time period**: Donated to UCI ML 27 April 1987\n\n### Inspiration\n\n- What types of machine learning models perform best on this dataset?\n\n- Which features are most indicative of a poisonous mushroom?\n\n### Acknowledgements\n\nThis dataset was originally donated to the UCI Machine Learning repository. You can learn more about past research using the data [here][1]. \n\n#[Start a new kernel][2]\n\n\n [1]: https://archive.ics.uci.edu/ml/datasets/Mushroom\n [2]: https://www.kaggle.com/uciml/mushroom-classification/kernels?modal=true", "VersionNotes": "Initial release", "TotalCompressedBytes": 374003.0, "TotalUncompressedBytes": 374003.0}]
|
[{"Id": 478, "CreatorUserId": 495305, "OwnerUserId": NaN, "OwnerOrganizationId": 7.0, "CurrentDatasetVersionId": 974.0, "CurrentDatasourceVersionId": 974.0, "ForumId": 2099, "Type": 2, "CreationDate": "12/01/2016 23:08:00", "LastActivityDate": "02/06/2018", "TotalViews": 873597, "TotalDownloads": 114985, "TotalVotes": 2206, "TotalKernels": 1371}]
| null |
| false | 0 | 830 | 0 | 1,133 | 830 |
||
129336616
|
<jupyter_start><jupyter_text>US Accidents (2016 - 2023)
### Description
This is a countrywide car accident dataset, which covers __49 states of the USA__. The accident data are collected from __February 2016 to Dec 2021__, using multiple APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road-networks. Currently, there are about __2.8 million__ accident records in this dataset. Check [here](https://smoosavi.org/datasets/us_accidents) to learn more about this dataset.
Kaggle dataset identifier: us-accidents
<jupyter_script># # Car Accidents in the United States (2016-2021)
import pandas as pd
import plotly.express as px
import numpy as np
import plotly.graph_objects as go
datos = pd.read_csv("/kaggle/input/us-accidents/US_Accidents_Dec21_updated.csv")
display(datos)
# ## Columns
print(datos.columns)
# ## Duplicate Values
print(datos.duplicated())
# ## Number of Duplicate Values
print(datos.duplicated().sum())
# ## Data Types (Full Dataset)
print(datos.dtypes)
# ## Null Values
b = pd.isnull(datos)
print(b.sum())
# ## Descriptive Statistics
display(datos.describe())
# ## Data Information
datos.info()
# ## Using Filters (Reducing File Size)
datos_r = datos.drop(
labels=[
"Start_Lat",
"End_Lat",
"Start_Lng",
"End_Lng",
"Amenity",
"Bump",
"Crossing",
"Give_Way",
"Junction",
"No_Exit",
"Railway",
"Roundabout",
"Station",
"Stop",
"Traffic_Calming",
"Traffic_Signal",
"Turning_Loop",
"Sunrise_Sunset",
"Civil_Twilight",
"Nautical_Twilight",
"Astronomical_Twilight",
],
axis=1,
inplace=False,
)
display(datos_r)
# ## Columns (Reduced)
print(datos_r.columns)
# ## Table Size: Number of Accidents
print(datos_r.shape[0])
# ## Correlation between Severity and Visibility
graph_1 = px.scatter(
datos_r,
x="Severity",
y="Visibility(mi)",
title="Severidad de accidentes por rango de visibilidad",
)
graph_1.show()
# ## Histogram
graph_2 = px.histogram(
datos_r, x="Severity", nbins=4, title="Severidad de Accidentes Registrados"
)
graph_2.show()
# ## Plot 3 (Do Not Run Until Presenting)
##grafica_3=px.box(datos_r, x = "Temperature(F)", y = "Visibility(mi)", color = "Severity")
# ## Correlation Matrix
##corr_matrix= datos.corr()
##graph_2 = go.Figure(data = go.Heatmap (x = corr_matrix.columns,
##y = corr_matrix.columns,
##z = corr_matrix.values, colorscale = "viridis"))
##graph_2.update_layout(title = "Matriz de Correlación")
##graph_2.show()
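# Added illustration (not part of the original notebook): the commented-out heatmap above
# can fail when non-numeric columns are present, so the sketch below restricts the
# correlation matrix to numeric columns; it assumes `datos_r` from the cells above.
corr_matrix = datos_r.select_dtypes(include="number").corr()
graph_corr = go.Figure(
    data=go.Heatmap(
        x=corr_matrix.columns,
        y=corr_matrix.columns,
        z=corr_matrix.values,
        colorscale="viridis",
    )
)
graph_corr.update_layout(title="Correlation matrix (numeric columns)")
graph_corr.show()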
# # K-means
# ## Replacing "np.nan" Values with Zero
datos_r["Temperature(F)"] = datos_r["Temperature(F)"].replace(np.nan, 0)
datos_r["Severity"] = datos_r["Severity"].replace(np.nan, 0)
datos_r["Humidity(%)"] = datos_r["Humidity(%)"].replace(np.nan, 0)
# ## Loading the Dataset
X = datos_r.iloc[:, [1, 17]].values
print(X)
# ## Elbow Method
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters=i, init="k-means++", random_state=42)
kmeans.fit(X)
wcss.append(kmeans.inertia_)
plt.plot(range(1, 11), wcss)
plt.title("Método del codo")
plt.xlabel("Numero de clusters}")
plt.ylabel("WCSS")
plt.show()
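# Added illustration (not part of the original notebook): besides the elbow curve, the
# silhouette score can serve as a sanity check for the number of clusters; it is computed
# here on a 10,000-row sample of X to keep the runtime manageable.
from sklearn.metrics import silhouette_score
X_sample = X[np.random.RandomState(42).choice(X.shape[0], size=10000, replace=False)]
for k in range(2, 6):
    labels = KMeans(n_clusters=k, init="k-means++", random_state=42).fit_predict(X_sample)
    print(k, silhouette_score(X_sample, labels))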
# ## Building the K-means Model
#
kmeans = KMeans(n_clusters=3, init="k-means++", random_state=42)
y_kmeans = kmeans.fit_predict(X)
plt.scatter(X[y_kmeans == 0, 0], X[y_kmeans == 0, 1], s=100, c="red", label="Cluster 1")
plt.scatter(
X[y_kmeans == 1, 0], X[y_kmeans == 1, 1], s=100, c="blue", label="Cluster 2"
)
plt.scatter(
X[y_kmeans == 2, 0], X[y_kmeans == 2, 1], s=100, c="green", label="Cluster 3"
)
plt.scatter(
kmeans.cluster_centers_[:, 0],
kmeans.cluster_centers_[:, 1],
s=300,
c="yellow",
label="Centroides",
)
plt.title("Clusters of customers")
plt.xlabel("Severidad")
plt.ylabel("Visibilidad")
plt.legend()
plt.show()
# ## K-means with Weather Data (Temperature and Humidity)
Z = datos_r.iloc[:, [17, 19]].values
print(Z)
# ## Elbow Method
warnings.filterwarnings("ignore")
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters=i, init="k-means++", random_state=42)
kmeans.fit(Z)
wcss.append(kmeans.inertia_)
plt.plot(range(1, 11), wcss)
plt.title("Método del codo")
plt.xlabel("Numero de clusters}")
plt.ylabel("WCSS")
plt.show()
# ## K-means
kmeans = KMeans(n_clusters=4, init="k-means++", random_state=42)
y_kmeans = kmeans.fit_predict(Z)
plt.scatter(Z[y_kmeans == 0, 0], Z[y_kmeans == 0, 1], s=100, c="red", label="Cluster 1")
plt.scatter(
Z[y_kmeans == 1, 0], Z[y_kmeans == 1, 1], s=100, c="blue", label="Cluster 2"
)
plt.scatter(
Z[y_kmeans == 2, 0], Z[y_kmeans == 2, 1], s=100, c="green", label="Cluster 3"
)
plt.scatter(
    Z[y_kmeans == 3, 0], Z[y_kmeans == 3, 1], s=100, c="magenta", label="Cluster 4"
)
plt.scatter(
kmeans.cluster_centers_[:, 0],
kmeans.cluster_centers_[:, 1],
s=300,
c="yellow",
label="Centroides",
)
plt.title("Clusters of customers")
plt.xlabel("Temperatura")
plt.ylabel("Humedad")
plt.legend()
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/336/129336616.ipynb
|
us-accidents
|
sobhanmoosavi
|
[{"Id": 129336616, "ScriptId": 38451908, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15008359, "CreationDate": "05/12/2023 23:00:08", "VersionNumber": 3.0, "Title": "Accidentes_Automovilisticos", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 172.0, "LinesInsertedFromPrevious": 106.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 66.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185285471, "KernelVersionId": 129336616, "SourceDatasetVersionId": 3286750}]
|
[{"Id": 3286750, "DatasetId": 199387, "DatasourceVersionId": 3337413, "CreatorUserId": 348067, "LicenseName": "CC BY-NC-SA 4.0", "CreationDate": "03/12/2022 07:23:02", "VersionNumber": 12.0, "Title": "US Accidents (2016 - 2023)", "Slug": "us-accidents", "Subtitle": "A Countrywide Traffic Accident Dataset (2016 - 2023)", "Description": "### Description\nThis is a countrywide car accident dataset, which covers __49 states of the USA__. The accident data are collected from __February 2016 to Dec 2021__, using multiple APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road-networks. Currently, there are about __2.8 million__ accident records in this dataset. Check [here](https://smoosavi.org/datasets/us_accidents) to learn more about this dataset. \n\n### Acknowledgements\nPlease cite the following papers if you use this dataset: \n\n- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, and Rajiv Ramnath. \u201c[A Countrywide Traffic Accident Dataset](https://arxiv.org/abs/1906.05409).\u201d, 2019.\n\n- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, Radu Teodorescu, and Rajiv Ramnath. [\"Accident Risk Prediction based on Heterogeneous Sparse Data: New Dataset and Insights.\"](https://arxiv.org/abs/1909.09638) In proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2019. \n\n### Content\nThis dataset has been collected in real-time, using multiple Traffic APIs. Currently, it contains accident data that are collected from February 2016 to Dec 2021 for the Contiguous United States. Check [here](https://smoosavi.org/datasets/us_accidents) to learn more about this dataset. \n\n### Inspiration\nUS-Accidents can be used for numerous applications such as real-time car accident prediction, studying car accidents hotspot locations, casualty analysis and extracting cause and effect rules to predict car accidents, and studying the impact of precipitation or other environmental stimuli on accident occurrence. The most recent release of the dataset can also be useful to study the impact of COVID-19 on traffic behavior and accidents. \n\n### Usage Policy and Legal Disclaimer\nThis dataset is being distributed only for __Research__ purposes, under Creative Commons Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0). By clicking on download button(s) below, you are agreeing to use this data only for non-commercial, research, or academic applications. You may need to cite the above papers if you use this dataset.\n\n### Inquiries or need help?\nFor any inquiries, contact me at [email protected]", "VersionNotes": "Data Update 2022/03/12", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 199387, "CreatorUserId": 348067, "OwnerUserId": 348067.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5793796.0, "CurrentDatasourceVersionId": 5870478.0, "ForumId": 210356, "Type": 2, "CreationDate": "05/20/2019 23:26:06", "LastActivityDate": "05/20/2019", "TotalViews": 697710, "TotalDownloads": 94299, "TotalVotes": 1910, "TotalKernels": 330}]
|
[{"Id": 348067, "UserName": "sobhanmoosavi", "DisplayName": "Sobhan Moosavi", "RegisterDate": "05/06/2015", "PerformanceTier": 2}]
|
| false | 1 | 1,816 | 0 | 1,992 | 1,816 |
||
129336895
|
<jupyter_start><jupyter_text>Predict the positions and speeds of 600 satellites
## **Background**
Within the past two decades, the number of resident space objects (RSOs - artificial objects that are in orbit around the Earth) has nearly doubled, from around 11000 objects in the year 2000 to around 19500 objects in 2019. This number is expected to rise even higher as more satellites are put into space, thanks to improvements in satellite technology and lower costs of production. On the other hand, the increase in the number of RSOs also indirectly increases the risk of collision between them. The important issue here is the reliable and accurate orbit tracking of satellites over sufficiently long periods of time.
Failure to address this issue has led to incidents such as the collision between the active US Iridium-33 communication satellite, and the inactive Russian Kosmos-2251 communication satellite in February 2009. In fact, this accident increased the amount of space debris by 13%, as shown in the figure below:

More accidents will result in more debris being produced, and through a chain reaction of collisions (if left unchecked), may lead to a dire situation in which it becomes difficult or downright impossible to put a satellite into orbit due to the large accumulation of space debris surrounding the Earth. This scenario is known as the **Kessler Syndrome**. Thus, considering the gravity of the situation at hand, it is imperative to prevent such catastrophic collisions from ever happening again.
## **Context**
The aim is to use machine learning or other forecasting algorithms to predict the positions and speeds of 600 satellites in orbit around the Earth. The original datasets were obtained from the **International Data Analytics Olympiad 2020 (IDAO 2020) Competition**, provided by the Russian Astronomical Science Centre. Being a competition, IDAO does not provide the answer key for their `test` dataset. However, I prepared their `train` dataset in such a way that testing one's algorithm can easily be done via the `answer_key`. This preparation is outlined in the notebook associated with this dataset ("Data Preparation to Produce Derived Datasets").
Satellite positions and speeds (henceforth, they will be collectively referred to as the **"kinematic states"**) can be measured using different methods, including simulations. In this dataset, there are two kinds of simulators: the precise simulator and the imprecise simulator. We refer to measurements made using the precise simulator as the **"true"** kinematic states of the satellite and measurements made using the imprecise simulator as the **"simulated"** kinematic states.
The **aim** is to make predictions for the true kinematic states of 600 satellites in the final 7 days of January 2014.
## **Datasets Description**
- The `jan_train` dataset contains data on **both true and simulated** kinematic states of 600 satellites for the 24-day period of time from 01-Jan-2014 00:00 to 24-Jan-2014 23:59.
- The `jan_test` dataset contains data on **only the simulated** kinematic states of the same 600 satellites for the final 7-day period of time from 25-Jan-2014 00:00 to 30-Jan-2014 23:59.
- The `answer_key` dataset contains data on **only the true** kinematic states of the same 600 satellites for the final 7-day period of time from 25-Jan-2014 00:00 to 30-Jan-2014 23:59. **NOTE:** The `answer_key` should only be used for the purpose of evaluation, **NOT** for training.
## **Variables Description**
- `id` (integer): unique row identifier
- `epoch` (datetime): datetime (at the instant of measurement) in "%Y-%m-%d %H:%M:%S.%f" format (e.g. 2014-01-27 18:28:18.284)
- `sat_id` (integer): unique satellite identifier, ranging from 0 to 599
- `x`, `y`, `z` (float): the true position coordinates of a satellite (km)
- `Vx`, `Vy`, `Vz` (float): the true speeds of a satellite, measured along the respective axes (km/s)
- `x_sim`, `y_sim`, `z_sim` (float): the simulated position coordinates of a satellite (km)
- `Vx_sim`, `Vy_sim`, `Vz_sim` (float): the simulated speeds of a satellite, measured along the respective axes (km/s)
## **Tasks**
1. Start by exploring the dataset as a whole, getting familiar with the data contained within.
2. Choose one satellite to explore in more detail, looking at both true and simulated kinematic states
3. Look at the periodicity and pattern of each kinematic state, and think about whether or not they will be useful in a predictive model.
4. Develop a machine learning model to forecast over the final 7 days for each kinematic state of the chosen satellite.
5. Repeat the process for the remaining 599 satellites, keeping in mind that there could be differences between them. Otherwise, choose the satellites that are most interesting or those that are quite different from each other.
## **Acknowledgements**
All ownership belongs to the team, experts and organizers of the IDAO 2020 competition and the Russian Astronomical Science Centre.
Original Owner of Datasets (IDAO 2020): https://idao.world/
License (Yandex Disk): https://yandex.com/legal/disk_termsofuse/
Database (where original data is stored): https://yadi.sk/d/0zYx00gSraxZ3w
Kaggle dataset identifier: predict-the-positions-and-speeds-of-600-satellites
<jupyter_script>import numpy as np
import cudf as pd
import seaborn as sns
import matplotlib.pyplot as plt
from cuml import preprocessing
from sklearn.metrics import precision_recall_fscore_support as score
from cuml.metrics import *
from cuml import *
from cuml.ensemble import RandomForestRegressor
from cuml.metrics import r2_score
train = pd.read_csv("jan_train.csv")
test = pd.read_csv("jan_test.csv")
key = pd.read_csv("answer_key.csv")
train.info()
train.epoch = pd.to_datetime(train.epoch)
sns.pairplot(train)
train.columns
X = train[
["id", "sat_id", "x_sim", "y_sim", "z_sim", "Vx_sim", "Vy_sim", "Vz_sim"]
].copy()
y = train[["x", "y", "z", "Vx", "Vy", "Vz"]]
min_max_scaler = preprocessing.MinMaxScaler()
X_scale = min_max_scaler.fit_transform(X)
X_scale
X_train, Y_train = X_scale, y
print(X_train.shape, Y_train.shape)
Y_train
Y_train_1 = Y_train.iloc[:, 0]
Y_train_2 = Y_train.iloc[:, 1]
Y_train_3 = Y_train.iloc[:, 2]
Y_train_4 = Y_train.iloc[:, 3]
Y_train_5 = Y_train.iloc[:, 4]
Y_train_6 = Y_train.iloc[:, 5]
Y_train_1
regressor_1 = RandomForestRegressor(n_estimators=100)
regressor_1.fit(X_train, Y_train_1)
regressor_2 = RandomForestRegressor(n_estimators=100)
regressor_2.fit(X_train, Y_train_2)
regressor_3 = RandomForestRegressor(n_estimators=100)
regressor_3.fit(X_train, Y_train_3)
regressor_4 = RandomForestRegressor(n_estimators=100)
regressor_4.fit(X_train, Y_train_4)
regressor_5 = RandomForestRegressor(n_estimators=100)
regressor_5.fit(X_train, Y_train_5)
regressor_6 = RandomForestRegressor(n_estimators=100)
regressor_6.fit(X_train, Y_train_6)
test
# Preprocess test data
min_max_scaler = preprocessing.MinMaxScaler()
test = test[
["id", "sat_id", "x_sim", "y_sim", "z_sim", "Vx_sim", "Vy_sim", "Vz_sim"]
].copy()
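# Note: a fresh scaler is fitted on the test data here; reusing the scaler fitted on the
# training features (min_max_scaler.transform) would keep train and test on the same scale.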
X_test = min_max_scaler.fit_transform(test)
# Make predictions
y_pred_1 = regressor_1.predict(X_test)
y_pred_2 = regressor_2.predict(X_test)
y_pred_3 = regressor_3.predict(X_test)
y_pred_4 = regressor_4.predict(X_test)
y_pred_5 = regressor_5.predict(X_test)
y_pred_6 = regressor_6.predict(X_test)
# Calculate R-squared values
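# This assumes the first six columns of answer_key are x, y, z, Vx, Vy, Vz, in the same
# order as the predictions above.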
r2_1 = r2_score(key.iloc[:, 0], y_pred_1)
r2_2 = r2_score(key.iloc[:, 1], y_pred_2)
r2_3 = r2_score(key.iloc[:, 2], y_pred_3)
r2_4 = r2_score(key.iloc[:, 3], y_pred_4)
r2_5 = r2_score(key.iloc[:, 4], y_pred_5)
r2_6 = r2_score(key.iloc[:, 5], y_pred_6)
# Calculate overall R-squared value
overall_r2 = (r2_1 + r2_2 + r2_3 + r2_4 + r2_5 + r2_6) / 6
overall_r2
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/336/129336895.ipynb
|
predict-the-positions-and-speeds-of-600-satellites
|
idawoodjee
|
[{"Id": 129336895, "ScriptId": 38443191, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10896833, "CreationDate": "05/12/2023 23:06:08", "VersionNumber": 1.0, "Title": "Satallite Kinematics Prediction", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 90.0, "LinesInsertedFromPrevious": 90.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 185285930, "KernelVersionId": 129336895, "SourceDatasetVersionId": 1554414}]
|
[{"Id": 1554414, "DatasetId": 917554, "DatasourceVersionId": 1589259, "CreatorUserId": 2124571, "LicenseName": "Other (specified in description)", "CreationDate": "10/12/2020 12:33:46", "VersionNumber": 1.0, "Title": "Predict the positions and speeds of 600 satellites", "Slug": "predict-the-positions-and-speeds-of-600-satellites", "Subtitle": "Use machine learning to predict 7-day trajectories of 600 satellites in orbit", "Description": "## **Background**\nWithin the past two decades, the number of resident space objects (RSOs - artificial objects that are in orbit around the Earth) has nearly doubled, from around 11000 objects in the year 2000 to around 19500 objects in 2019. This number is expected to rise even higher as more satellites are put into space, thanks to improvements in satellite technology and lower costs of production. On the other hand, the increase in the number of RSOs also indirectly increases the risk of collision between them. The important issue here is the reliable and accurate orbit tracking of satellites over sufficiently long periods of time. \n\nFailure to address this issue has led to incidents such as the collision between the active US Iridium-33 communication satellite, and the inactive Russian Kosmos-2251 communication satellite in February 2009. In fact, this accident increased the amount of space debris by 13%, as shown in the figure below:\n\n\n\nMore accidents will result in more debris being produced, and through a chain reaction of collisions (if left unchecked), may lead to a dire situation in which it becomes difficult or downright impossible to put a satellite into orbit due to the large accumulation of space debris surrounding the Earth. This scenario is known as the **Kessler Syndrome**. Thus, considering the gravity of the situation at hand, it is imperative to prevent such catastrophic collisions from ever happening again.\n\n## **Context**\nThe aim is to use machine learning or other forecasting algorithms to predict the positions and speeds of 600 satellites in orbit around the Earth. The original datasets were obtained from the **International Data Analytics Olympiad 2020 (IDAO 2020) Competition**, provided by the Russian Astronomical Science Centre. Being a competition, IDAO does not provide the answer key for their `test` dataset. However, I prepared their `train` dataset in such a way that testing one's algorithm can easily be done via the `answer_key`. This preparation is outlined in the notebook associated with this dataset (\"Data Preparation to Produce Derived Datasets\"). \n\nSatellite positions and speeds (henceforth, they will be collectively referred to as the **\"kinematic states\"**) can be measured using different methods, including simulations. In this dataset, there are two kinds of simulators: the precise simulator and the imprecise simulator. We refer to measurements made using the precise simulator as the **\"true\"** kinematic states of the satellite and measurements made using the imprecise simulator as the **\"simulated\"** kinematic states. \n\nThe **aim** is to make predictions for the true kinematic states of 600 satellites in the final 7 days of January 2014.\n\n## **Datasets Description**\n- The `jan_train` dataset contains data on **both true and simulated** kinematic states of 600 satellites for the 24-day period of time from 01-Jan-2014 00:00 to 24-Jan-2014 23:59. 
\n- The `jan_test` dataset contains data on **only the simulated** kinematic states of the same 600 satellites for the final 7-day period of time from 25-Jan-2014 00:00 to 30-Jan-2014 23:59. \n- The `answer_key` dataset contains data on **only the true** kinematic states of the same 600 satellites for the final 7-day period of time from 25-Jan-2014 00:00 to 30-Jan-2014 23:59. **NOTE:** The `answer_key` should only be used for the purpose of evaluation, **NOT** for training.\n\n## **Variables Description**\n- `id` (integer): unique row identifier\n- `epoch` (datetime): datetime (at the instant of measurement) in \"%Y-%m-%d %H:%M:%S.%f\" format (e.g. 2014-01-27 18:28:18.284)\n- `sat_id` (integer): unique satellite identifier, ranging from 0 to 599\n- `x`, `y`, `z` (float): the true position coordinates of a satellite (km)\n- `Vx`, `Vy`, `Vz` (float): the true speeds of a satellite, measured along the respective axes (km/s)\n- `x_sim`, `y_sim`, `z_sim` (float): the simulated position coordinates of a satellite (km)\n- `Vx_sim`, `Vy_sim`, `Vz_sim` (float): the simulated speeds of a satellite, measured along the respective axes (km/s)\n\n## **Tasks**\n1. Start by exploring the dataset as a whole, getting familiar with the data contained within.\n2. Choose one satellite to explore in more detail, looking at both true and simulated kinematic states\n3. Look at the periodicity and pattern of each kinematic state, and think about whether or not they will be useful in a predictive model.\n4. Develop a machine learning model to forecast over the final 7 days for each kinematic state of the chosen satellite.\n5. Repeat the process for the remaining 599 satellites, keeping in mind that there could be differences between them. Otherwise, choose the satellites that are most interesting or those that are quite different from each other.\n\n## **Acknowledgements**\nAll ownership belongs to the team, experts and organizers of the IDAO 2020 competition and the Russian Astronomical Science Centre.\n\nOriginal Owner of Datasets (IDAO 2020): https://idao.world/\nLicense (Yandex Disk): https://yandex.com/legal/disk_termsofuse/\nDatabase (where original data is stored): https://yadi.sk/d/0zYx00gSraxZ3w", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 917554, "CreatorUserId": 2124571, "OwnerUserId": 2124571.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1554414.0, "CurrentDatasourceVersionId": 1589259.0, "ForumId": 933384, "Type": 2, "CreationDate": "10/12/2020 12:33:46", "LastActivityDate": "10/12/2020", "TotalViews": 8394, "TotalDownloads": 535, "TotalVotes": 19, "TotalKernels": 4}]
|
[{"Id": 2124571, "UserName": "idawoodjee", "DisplayName": "Ismail Dawoodjee", "RegisterDate": "08/04/2018", "PerformanceTier": 0}]
|
import numpy as np
import cudf as pd
import seaborn as sns
import matplotlib.pyplot as plt
from cuml import preprocessing
from sklearn.metrics import precision_recall_fscore_support as score
from cuml.metrics import *
from cuml import *
from cuml.ensemble import RandomForestRegressor
from cuml.metrics import r2_score
train = pd.read_csv("jan_train.csv")
test = pd.read_csv("jan_test.csv")
key = pd.read_csv("answer_key.csv")
train.info()
train.epoch = pd.to_datetime(train.epoch)
sns.pairplot(train)
train.columns
X = train[
["id", "sat_id", "x_sim", "y_sim", "z_sim", "Vx_sim", "Vy_sim", "Vz_sim"]
].copy()
y = train[["x", "y", "z", "Vx", "Vy", "Vz"]]
min_max_scaler = preprocessing.MinMaxScaler()
X_scale = min_max_scaler.fit_transform(X)
X_scale
X_train, Y_train = X_scale, y
print(X_train.shape, Y_train.shape)
Y_train
Y_train_1 = Y_train.iloc[:, 0]
Y_train_2 = Y_train.iloc[:, 1]
Y_train_3 = Y_train.iloc[:, 2]
Y_train_4 = Y_train.iloc[:, 3]
Y_train_5 = Y_train.iloc[:, 4]
Y_train_6 = Y_train.iloc[:, 5]
Y_train_1
regressor_1 = RandomForestRegressor(n_estimators=100)
regressor_1.fit(X_train, Y_train_1)
regressor_2 = RandomForestRegressor(n_estimators=100)
regressor_2.fit(X_train, Y_train_2)
regressor_3 = RandomForestRegressor(n_estimators=100)
regressor_3.fit(X_train, Y_train_3)
regressor_4 = RandomForestRegressor(n_estimators=100)
regressor_4.fit(X_train, Y_train_4)
regressor_5 = RandomForestRegressor(n_estimators=100)
regressor_5.fit(X_train, Y_train_5)
regressor_6 = RandomForestRegressor(n_estimators=100)
regressor_6.fit(X_train, Y_train_6)
test
# Preprocess test data
min_max_scaler = preprocessing.MinMaxScaler()
test = test[
["id", "sat_id", "x_sim", "y_sim", "z_sim", "Vx_sim", "Vy_sim", "Vz_sim"]
].copy()
X_test = min_max_scaler.fit_transform(test)
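# Side note (my own addition, not part of the original notebook): the cell above fits a
# brand-new MinMaxScaler on the test features, so train and test end up scaled with
# different min/max values. A minimal sketch of the more consistent approach is to fit
# the scaler on the training features only and reuse it for the test features; the names
# train_scaler and X_test_consistent are purely illustrative.
train_scaler = preprocessing.MinMaxScaler()
train_scaler.fit(X)  # X still holds the raw (unscaled) training features from earlier
X_test_consistent = train_scaler.transform(test)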
# Make predictions
y_pred_1 = regressor_1.predict(X_test)
y_pred_2 = regressor_2.predict(X_test)
y_pred_3 = regressor_3.predict(X_test)
y_pred_4 = regressor_4.predict(X_test)
y_pred_5 = regressor_5.predict(X_test)
y_pred_6 = regressor_6.predict(X_test)
# Calculate R-squared values
r2_1 = r2_score(key.iloc[:, 0], y_pred_1)
r2_2 = r2_score(key.iloc[:, 1], y_pred_2)
r2_3 = r2_score(key.iloc[:, 2], y_pred_3)
r2_4 = r2_score(key.iloc[:, 3], y_pred_4)
r2_5 = r2_score(key.iloc[:, 4], y_pred_5)
r2_6 = r2_score(key.iloc[:, 5], y_pred_6)
# Calculate overall R-squared value
overall_r2 = (r2_1 + r2_2 + r2_3 + r2_4 + r2_5 + r2_6) / 6
overall_r2
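# The six regressors above repeat the same fit/predict/score steps by hand. A minimal
# sketch of the same workflow written as a loop (my own restructuring, not part of the
# original notebook; it assumes the answer key has columns named x, y, z, Vx, Vy, Vz):
r2_scores = []
for i, target in enumerate(["x", "y", "z", "Vx", "Vy", "Vz"]):
    reg = RandomForestRegressor(n_estimators=100)
    reg.fit(X_train, Y_train.iloc[:, i])
    r2_scores.append(r2_score(key[target], reg.predict(X_test)))
print("mean R2 over the six targets:", sum(r2_scores) / len(r2_scores))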
| false | 0 | 1,026 | 1 | 2,528 | 1,026 |
||
129246530
|
<jupyter_start><jupyter_text>FIFA 1962
Kaggle dataset identifier: fifa-1962
<jupyter_script>import numpy as np # linear algebra
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
data = pd.read_csv("//kaggle/input/fifa-1962/DATASET/gate_applicants1.csv")
data
sns.barplot(x=data["Year"].value_counts().index, y=data["Year"].value_counts())
plt.show()
plt.pie(
data["Year"].value_counts(),
labels=data["Year"].value_counts().index,
autopct="%1.1f%%",
)
plt.show()
# DATA_FETCH : 2019
new = data[["Year", "Paper Name", "Paper Code", "Applied"]]
new
new_year = new[["Paper Name", "Paper Code", "Applied"]]
new_year_df = new.Year == 2019
final_year = new_year[new_year_df]
final_year_df = final_year.sort_values(by=["Applied"], ascending=False)
final_year_df
sns.barplot(y="Paper Name", x="Applied", data=final_year_df)
plt.show()
# From the above analysis, the Mechanical Engineering paper drew the largest number of applicants in 2019
# Next, extract the data for 2017
data2 = new[["Paper Name", "Paper Code", "Applied"]]
data2_df = new.Year == 2017
data17 = data2[data2_df]
data17_df = data17.sort_values(by=["Applied"], ascending=False)
data17_df
# plotting Bar_plot For Above Dataset
sns.barplot(x="Paper Name", y="Applied", data=data17_df)
plt.xticks(rotation=90)
plt.show()
data3 = new[["Paper Name", "Paper Code", "Applied"]]
data3_df = new.Year == 2016
data16 = data3[data3_df]
data16_df = data16.sort_values(by=["Applied"], ascending=False)
data16_df
sns.barplot(x="Paper Name", y="Applied", data=data16_df)
plt.xticks(rotation=90)
plt.show()
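# The 2019/2017/2016 blocks above repeat the same filter-sort-plot steps. A minimal
# sketch of the same idea as a reusable helper (my own refactor, not part of the
# original notebook; applicants_by_year is an illustrative name):
def applicants_by_year(frame, year):
    subset = frame.loc[frame["Year"] == year, ["Paper Name", "Paper Code", "Applied"]]
    return subset.sort_values(by="Applied", ascending=False)


for yr in (2019, 2017, 2016):
    yearly = applicants_by_year(new, yr)
    sns.barplot(x="Paper Name", y="Applied", data=yearly)
    plt.xticks(rotation=90)
    plt.title(f"Applicants per paper, {yr}")
    plt.show()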
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/246/129246530.ipynb
|
fifa-1962
|
bhoiarpan
|
[{"Id": 129246530, "ScriptId": 38425566, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8014201, "CreationDate": "05/12/2023 06:43:52", "VersionNumber": 2.0, "Title": "GATE_APPLICANT_DATASET_ANALYSIS", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 78.0, "LinesInsertedFromPrevious": 17.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 61.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185123018, "KernelVersionId": 129246530, "SourceDatasetVersionId": 3808866}]
|
[{"Id": 3808866, "DatasetId": 2269985, "DatasourceVersionId": 3863592, "CreatorUserId": 8014201, "LicenseName": "Unknown", "CreationDate": "06/15/2022 17:59:33", "VersionNumber": 1.0, "Title": "FIFA 1962", "Slug": "fifa-1962", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2269985, "CreatorUserId": 8014201, "OwnerUserId": 8014201.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3808866.0, "CurrentDatasourceVersionId": 3863592.0, "ForumId": 2296489, "Type": 2, "CreationDate": "06/15/2022 17:59:33", "LastActivityDate": "06/15/2022", "TotalViews": 270, "TotalDownloads": 2, "TotalVotes": 1, "TotalKernels": 1}]
|
[{"Id": 8014201, "UserName": "bhoiarpan", "DisplayName": "Bhoi Arpan", "RegisterDate": "07/30/2021", "PerformanceTier": 0}]
|
import numpy as np # linear algebra
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
data = pd.read_csv("//kaggle/input/fifa-1962/DATASET/gate_applicants1.csv")
data
sns.barplot(x=data["Year"].value_counts().index, y=data["Year"].value_counts())
plt.show()
plt.pie(
data["Year"].value_counts(),
labels=data["Year"].value_counts().index,
autopct="%1.1f%%",
)
plt.show()
# DATA_FETCH : 2019
new = data[["Year", "Paper Name", "Paper Code", "Applied"]]
new
new_year = new[["Paper Name", "Paper Code", "Applied"]]
new_year_df = new.Year == 2019
final_year = new_year[new_year_df]
final_year_df = final_year.sort_values(by=["Applied"], ascending=False)
final_year_df
sns.barplot(y="Paper Name", x="Applied", data=final_year_df)
plt.show()
# From the above analysis, the Mechanical Engineering paper drew the largest number of applicants in 2019
# Next, extract the data for 2017
data2 = new[["Paper Name", "Paper Code", "Applied"]]
data2_df = new.Year == 2017
data17 = data2[data2_df]
data17_df = data17.sort_values(by=["Applied"], ascending=False)
data17_df
# plotting Bar_plot For Above Dataset
sns.barplot(x="Paper Name", y="Applied", data=data17_df)
plt.xticks(rotation=90)
plt.show()
data3 = new[["Paper Name", "Paper Code", "Applied"]]
data3_df = new.Year == 2016
data16 = data3[data3_df]
data16_df = data16.sort_values(by=["Applied"], ascending=False)
data16_df
sns.barplot(x="Paper Name", y="Applied", data=data16_df)
plt.xticks(rotation=90)
plt.show()
| false | 1 | 731 | 0 | 759 | 731 |
||
129246693
|
<jupyter_start><jupyter_text>Smartphone Specifications and Prices in India
# Contex :
The dataset seems to contain information about different mobile phone models along with their specifications such as price, rating, sim type, processor, RAM, battery, display, camera, memory card support, and operating system.
Source: The csv data was scraped from https://www.smartprix.com/mobiles
Inspiration: You can build a recommender system or Price Prediction using the csv data.
Kaggle dataset identifier: smartphone-specifications-and-prices-in-india
<jupyter_script># ### Import the csv
import pandas as pd
df = pd.read_csv(
"/kaggle/input/smartphone-specifications-and-prices-in-india/smartphones - smartphones.csv"
)
# ### Remove faulty data
df.head()
df.count()
# remove rows with null values in the camera, card and os columns
df.dropna(subset=["camera", "card", "os"], inplace=True)
df.count()
import re
import pandas as pd
def filter_dataframe(df, column, pattern):
mask = df[column].str.contains(pattern)
return df.loc[mask]
# Define the expected patterns for the ram, battery, display and camera columns
regex_dict = {
"ram": r"\d+\u2009GB\sRAM,\s\d+\u2009GB\sinbuilt",
"battery": r"\d+\s*mAh\s+Battery",
"display": r"\d+\.\d+\sinches,\s\d+\u2009x\u2009\d+\u2009px",
"camera": r"^\d+\u2009MP.*Rear\s&\s\d+\u2009MP\sFront\sCamera$",
}
for column, pattern in regex_dict.items():
df = filter_dataframe(df, column, pattern)
# #### Testing each Columns individually
df.columns
uni_list = df["card"].unique()
print(uni_list)
# Define the expected pattern for the memory card information
pattern = r"^Memory"
column = "card"
mask = df[column].str.contains(pattern)
# Filtering on this pattern is left commented out here; we only inspect the unique values below
# df = df.loc[mask]
df[column].unique()
def map_os(os):
if pd.isna(os):
return "other"
elif "android" in os.lower():
return "android"
elif "ios" in os.lower():
return "ios"
else:
return "other"
# Apply the function to the os column and fill remaining null values with 'other'
df["os"] = df["os"].apply(map_os).fillna("other")
df["os"].value_counts()
def map_memory_card_support(text):
if pd.isna(text):
return "other"
elif "supported" in text.lower() or "hybrid" in text.lower():
return "Supported"
else:
return "Not Supported"
# Apply the function to the card column and fill remaining null values with 'other'
df["card"] = df["card"].apply(map_memory_card_support).fillna("other")
df["card"].value_counts()
df.count()
df.head()
# ### Extract New Columns
# brand, sim slot, sim generation, isVoLTE, hasIRBlaster, hasNFC, hasVo5g, processorManufacturer, processorModel,RAMSize, ROMSize, BatterySize, ChargingSpeed, displaySize, displayResolution, displayRefreshRate, BackCamera, FrontCamera, BackCameraSetup.
df["brand"] = df["model"].str.split().str[0]
df["proc_mod"] = df["processor"].str.extract(r"^([\w\s]+?),")
df.dropna(subset=["proc_mod"], inplace=True)
df["proc_man"] = df["proc_mod"].str.split().str[0]
# Define a function to extract the processor speed using regex
def extract_processor_speed(text):
pattern = r"\d+\.\d+\s*GHz" # regex pattern to match processor speed
match = re.search(pattern, text)
if match:
return match.group(0)
else:
return None
# Apply the function to extract the processor speed and add it as a new column in the DataFrame
df["proc_speed"] = df["processor"].apply(extract_processor_speed)
df.drop("processor", axis=1, inplace=True)
# Define a function to extract sim slot info
def get_sim_slots(row):
if "Dual" in row["sim"]:
return 2
else:
return 1
# Apply the function to create a new column
df["sim_slot"] = df.apply(get_sim_slots, axis=1)
# Create isVoLTE column
df["isVoLTE"] = df["sim"].apply(lambda x: 1 if "VoLTE" in x else 0)
# Create hasIRBlaster column
df["hasIRBlaster"] = df["sim"].apply(lambda x: 1 if "IR" in x else 0)
# Create NFC column
df["hasNFC"] = df["sim"].apply(lambda x: 1 if "NFC" in x else 0)
# Create 5G column
df["has5G"] = df["sim"].apply(lambda x: 1 if "5G" in x else 0)
df.drop("sim", axis=1, inplace=True)
# extract RAM and ROM sizes using regular expressions
df["ram_size"] = df["ram"].apply(
lambda x: int(re.search(r"(\d+)\s*GB\s*RAM", x).group(1))
)
df["rom_size"] = df["ram"].apply(
lambda x: int(re.search(r"(\d+)\s*GB\s*inbuilt", x).group(1))
)
df.drop("ram", axis=1, inplace=True)
def extract_charging_speed(text):
match = re.search(r"(\d+(\.\d+)?)W", text)
if match:
return float(match.group(1))
else:
return 10.0 # default value
df["charging_speed"] = df["battery"].apply(extract_charging_speed)
# Extract battery capacity using regular expression
df["battery_capacity"] = (
df["battery"].str.extract("(\d+)\s*mAh", expand=False).astype(int)
)
df.drop("battery", axis=1, inplace=True)
df.rename(columns={"price": "pr", "ram_size": "ram", "rom_size": "rom"}, inplace=True)
# function to extract display refresh rate from the display column
def extract_refresh_rate(text):
match = re.search(r"(\d+)\s*Hz", text)
if match:
return int(match.group(1))
else:
return 60 # default value
def get_display_resolution(s):
match = re.search(r"([\d.]+)\s*inches,\s*(\d+)\s*x\s*(\d+)\s*px", s)
if match:
res = int(match.group(2))
if res >= 2160:
return 2160
elif res >= 1440:
return 1440
elif res >= 1080:
return 1080
return 720
# apply the function to the display column and create a new column
df["display_refresh_rate"] = df["display"].apply(extract_refresh_rate)
# Apply regular expression pattern to extract display size
df["display_size"] = (
df["display"].str.extract(r"([\d.]+)\s*inches", expand=False).astype(float)
)
df["display_resolution"] = df["display"].apply(get_display_resolution)
df.drop("display", axis=1, inplace=True)
# Define function to extract the primary (first listed) back camera resolution
def get_back_camera_resolution(s):
pattern = r"(\d+)\s*MP"
match = re.search(pattern, s)
if match:
return float(match.group(1))
else:
return None
def get_front_camera_resolution(s):
pattern = r"(\d+)\s*MP\s+Front"
match = re.search(pattern, s)
if match:
return float(match.group(1))
else:
return None
def get_back_camera_setup(s):
pattern = r"(\d+)\s*MP."
match = re.findall(pattern, s)
if match:
return len(match) - 1
else:
return 1
# Apply the function to the 'camera' column and create a new column called 'front_camera_resolution'
df["front_camera_resolution"] = df["camera"].apply(get_front_camera_resolution)
# Apply function to 'camera' column to create 'back_camera_resolution' column
df["back_camera_resolution"] = df["camera"].apply(get_back_camera_resolution)
# Apply function to 'camera' column to create 'back_camera_setup' column
df["back_camera_setup"] = df["camera"].apply(get_back_camera_setup)
df.drop("camera", axis=1, inplace=True)
# Define a function to extract the floating values from the proc_speed column
def extract_float(s):
if s is None:
return None
pattern = r"(\d+\.\d+)\s*GHz"
match = re.search(pattern, s)
if match:
return float(match.group(1))
else:
return 0
# Apply the function to the proc_speed column
df["proc_speed"] = df["proc_speed"].apply(extract_float)
df.count()
df["proc_mod"].unique()
df
# #### Dummy Variables & Feature Hashing
df.columns
df.rename(
columns={
"proc_mod": "processor_model",
"proc_man": "processor_manufacturer",
"proc_speed": "processor_speed",
},
inplace=True,
)
df = pd.get_dummies(df, columns=["card", "os"], drop_first=True)
# get_dummies converts these categorical attributes into binary indicator columns
# drop_first=True avoids the dummy variable trap
df.head()
df["pr"].unique()
# extract the integer value from the pr column
df["pr"] = df["pr"].str.replace(",", "").str.extract(r"(\d+)").astype(int)
import joblib
# Used Label Encoding to encode categorical values
from sklearn.preprocessing import LabelEncoder
# define the columns to encode using label encoding
columns_to_encode = ["brand", "processor_model", "processor_manufacturer"]
# create a dictionary to store the trained label encoders
label_encoders = {}
# create a LabelEncoder object for each column
for col in columns_to_encode:
le = LabelEncoder()
df[col] = le.fit_transform(df[col].astype(str))
label_encoders[col] = le
# save the trained label encoders to a file
joblib.dump(label_encoders, "label_encoders.joblib")
# I'm exporting the label encoders so that when I receive new data I can apply the same
# encoders without retraining the model.
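# A minimal sketch (my own illustration, not part of the original pipeline) of how the
# saved encoders could later be applied to new data with the same categorical columns;
# encode_new_data is a hypothetical helper name. Note that LabelEncoder.transform raises
# on categories it has never seen, so unseen brands/processors would need extra handling.
loaded_encoders = joblib.load("label_encoders.joblib")


def encode_new_data(new_df):
    out = new_df.copy()
    for cat_col in columns_to_encode:
        out[cat_col] = loaded_encoders[cat_col].transform(out[cat_col].astype(str))
    return out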
df.drop("model", axis=1, inplace=True)
# move the price column to the last position
price = df.pop("pr")
df.insert(len(df.columns), "price", price)
# reindex the dataframe from 0 to n
df = df.reset_index(drop=True)
# export the DataFrame to a CSV file
df.to_csv("smartphones_prepared_data.csv", index=False)
df
df.count()
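# The prepared frame is meant for price prediction, so here is a minimal baseline sketch
# on top of it (my own addition; the 80/20 split and the RandomForestRegressor choice are
# assumptions, not part of the original notebook):
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

features = df.drop(columns=["price"]).fillna(0)  # e.g. a few ratings may still be missing
target = df["price"]
X_tr, X_te, y_tr, y_te = train_test_split(
    features, target, test_size=0.2, random_state=42
)
baseline = RandomForestRegressor(n_estimators=200, random_state=42)
baseline.fit(X_tr, y_tr)
print("Baseline MAE:", mean_absolute_error(y_te, baseline.predict(X_te)))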
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/246/129246693.ipynb
|
smartphone-specifications-and-prices-in-india
|
shrutiambekar
|
[{"Id": 129246693, "ScriptId": 38392616, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14797692, "CreationDate": "05/12/2023 06:45:35", "VersionNumber": 5.0, "Title": "Prepared Features", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 268.0, "LinesInsertedFromPrevious": 1.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 267.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
|
[{"Id": 185123244, "KernelVersionId": 129246693, "SourceDatasetVersionId": 5390758}]
|
[{"Id": 5390758, "DatasetId": 3124911, "DatasourceVersionId": 5464470, "CreatorUserId": 11981028, "LicenseName": "CC0: Public Domain", "CreationDate": "04/13/2023 06:13:13", "VersionNumber": 1.0, "Title": "Smartphone Specifications and Prices in India", "Slug": "smartphone-specifications-and-prices-in-india", "Subtitle": "Dataset about Smartphones for Price Prediction", "Description": "# Contex :\nThe dataset seems to contain information about different mobile phone models along with their specifications such as price, rating, sim type, processor, RAM, battery, display, camera, memory card support, and operating system.\n\nSource: The csv data was scraped from https://www.smartprix.com/mobiles\n\nInspiration: You can build a recommender system or Price Prediction using the csv data.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3124911, "CreatorUserId": 11981028, "OwnerUserId": 11981028.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5390758.0, "CurrentDatasourceVersionId": 5464470.0, "ForumId": 3188482, "Type": 2, "CreationDate": "04/13/2023 06:13:13", "LastActivityDate": "04/13/2023", "TotalViews": 5616, "TotalDownloads": 1086, "TotalVotes": 29, "TotalKernels": 7}]
|
[{"Id": 11981028, "UserName": "shrutiambekar", "DisplayName": "shruti ambekar", "RegisterDate": "10/17/2022", "PerformanceTier": 2}]
|
# ### Import the csv
import pandas as pd
df = pd.read_csv(
"/kaggle/input/smartphone-specifications-and-prices-in-india/smartphones - smartphones.csv"
)
# ### Remove faulty data
df.head()
df.count()
# remove rows with null values in the camera, card and os columns
df.dropna(subset=["camera", "card", "os"], inplace=True)
df.count()
import re
import pandas as pd
def filter_dataframe(df, column, pattern):
mask = df[column].str.contains(pattern)
return df.loc[mask]
# Define the expected patterns for the ram, battery, display and camera columns
regex_dict = {
"ram": r"\d+\u2009GB\sRAM,\s\d+\u2009GB\sinbuilt",
"battery": r"\d+\s*mAh\s+Battery",
"display": r"\d+\.\d+\sinches,\s\d+\u2009x\u2009\d+\u2009px",
"camera": r"^\d+\u2009MP.*Rear\s&\s\d+\u2009MP\sFront\sCamera$",
}
for column, pattern in regex_dict.items():
df = filter_dataframe(df, column, pattern)
# #### Testing each Columns individually
df.columns
uni_list = df["card"].unique()
print(uni_list)
# Define the expected pattern for the memory card information
pattern = r"^Memory"
column = "card"
mask = df[column].str.contains(pattern)
# Filtering on this pattern is left commented out here; we only inspect the unique values below
# df = df.loc[mask]
df[column].unique()
def map_os(os):
if pd.isna(os):
return "other"
elif "android" in os.lower():
return "android"
elif "ios" in os.lower():
return "ios"
else:
return "other"
# Apply the function to the os column and fill remaining null values with 'other'
df["os"] = df["os"].apply(map_os).fillna("other")
df["os"].value_counts()
def map_memory_card_support(text):
if pd.isna(text):
return "other"
elif "supported" in text.lower() or "hybrid" in text.lower():
return "Supported"
else:
return "Not Supported"
# Apply the function to the card column and fill remaining null values with 'other'
df["card"] = df["card"].apply(map_memory_card_support).fillna("other")
df["card"].value_counts()
df.count()
df.head()
# ### Extract New Columns
# brand, sim slot, sim generation, isVoLTE, hasIRBlaster, hasNFC, hasVo5g, processorManufacturer, processorModel,RAMSize, ROMSize, BatterySize, ChargingSpeed, displaySize, displayResolution, displayRefreshRate, BackCamera, FrontCamera, BackCameraSetup.
df["brand"] = df["model"].str.split().str[0]
df["proc_mod"] = df["processor"].str.extract(r"^([\w\s]+?),")
df.dropna(subset=["proc_mod"], inplace=True)
df["proc_man"] = df["proc_mod"].str.split().str[0]
# Define a function to extract the processor speed using regex
def extract_processor_speed(text):
pattern = r"\d+\.\d+\s*GHz" # regex pattern to match processor speed
match = re.search(pattern, text)
if match:
return match.group(0)
else:
return None
# Apply the function to extract the processor speed and add it as a new column in the DataFrame
df["proc_speed"] = df["processor"].apply(extract_processor_speed)
df.drop("processor", axis=1, inplace=True)
# Define a function to extract sim slot info
def get_sim_slots(row):
if "Dual" in row["sim"]:
return 2
else:
return 1
# Apply the function to create a new column
df["sim_slot"] = df.apply(get_sim_slots, axis=1)
# Create isVoLTE column
df["isVoLTE"] = df["sim"].apply(lambda x: 1 if "VoLTE" in x else 0)
# Create hasIRBlaster column
df["hasIRBlaster"] = df["sim"].apply(lambda x: 1 if "IR" in x else 0)
# Create NFC column
df["hasNFC"] = df["sim"].apply(lambda x: 1 if "NFC" in x else 0)
# Create 5G column
df["has5G"] = df["sim"].apply(lambda x: 1 if "5G" in x else 0)
df.drop("sim", axis=1, inplace=True)
# extract RAM and ROM sizes using regular expressions
df["ram_size"] = df["ram"].apply(
lambda x: int(re.search(r"(\d+)\s*GB\s*RAM", x).group(1))
)
df["rom_size"] = df["ram"].apply(
lambda x: int(re.search(r"(\d+)\s*GB\s*inbuilt", x).group(1))
)
df.drop("ram", axis=1, inplace=True)
def extract_charging_speed(text):
match = re.search(r"(\d+(\.\d+)?)W", text)
if match:
return float(match.group(1))
else:
return 10.0 # default value
df["charging_speed"] = df["battery"].apply(extract_charging_speed)
# Extract battery capacity using regular expression
df["battery_capacity"] = (
df["battery"].str.extract("(\d+)\s*mAh", expand=False).astype(int)
)
df.drop("battery", axis=1, inplace=True)
df.rename(columns={"price": "pr", "ram_size": "ram", "rom_size": "rom"}, inplace=True)
# function to extract display refresh rate from the display column
def extract_refresh_rate(text):
match = re.search(r"(\d+)\s*Hz", text)
if match:
return int(match.group(1))
else:
return 60 # default value
def get_display_resolution(s):
match = re.search(r"([\d.]+)\s*inches,\s*(\d+)\s*x\s*(\d+)\s*px", s)
if match:
res = int(match.group(2))
if res >= 2160:
return 2160
elif res >= 1440:
return 1440
elif res >= 1080:
return 1080
return 720
# apply the function to the display column and create a new column
df["display_refresh_rate"] = df["display"].apply(extract_refresh_rate)
# Apply regular expression pattern to extract display size
df["display_size"] = (
df["display"].str.extract(r"([\d.]+)\s*inches", expand=False).astype(float)
)
df["display_resolution"] = df["display"].apply(get_display_resolution)
df.drop("display", axis=1, inplace=True)
# Define function to extract the primary (first listed) back camera resolution
def get_back_camera_resolution(s):
pattern = r"(\d+)\s*MP"
match = re.search(pattern, s)
if match:
return float(match.group(1))
else:
return None
def get_front_camera_resolution(s):
pattern = r"(\d+)\s*MP\s+Front"
match = re.search(pattern, s)
if match:
return float(match.group(1))
else:
return None
def get_back_camera_setup(s):
pattern = r"(\d+)\s*MP."
match = re.findall(pattern, s)
if match:
return len(match) - 1
else:
return 1
# Apply the function to the 'camera' column and create a new column called 'front_camera_resolution'
df["front_camera_resolution"] = df["camera"].apply(get_front_camera_resolution)
# Apply function to 'camera' column to create 'back_camera_resolution' column
df["back_camera_resolution"] = df["camera"].apply(get_back_camera_resolution)
# Apply function to 'camera' column to create 'back_camera_setup' column
df["back_camera_setup"] = df["camera"].apply(get_back_camera_setup)
df.drop("camera", axis=1, inplace=True)
# Define a function to extract the floating values from the proc_speed column
def extract_float(s):
if s is None:
return None
pattern = r"(\d+\.\d+)\s*GHz"
match = re.search(pattern, s)
if match:
return float(match.group(1))
else:
return 0
# Apply the function to the proc_speed column
df["proc_speed"] = df["proc_speed"].apply(extract_float)
df.count()
df["proc_mod"].unique()
df
# #### Dummy Variables & Feature Hashing
df.columns
df.rename(
columns={
"proc_mod": "processor_model",
"proc_man": "processor_manufacturer",
"proc_speed": "processor_speed",
},
inplace=True,
)
df = pd.get_dummies(df, columns=["card", "os"], drop_first=True)
# get_dummies converts these categorical attributes into binary indicator columns
# drop_first=True avoids the dummy variable trap
df.head()
df["pr"].unique()
# extract the integer value from the pr column
df["pr"] = df["pr"].str.replace(",", "").str.extract(r"(\d+)").astype(int)
import joblib
# Used Label Encoding to encode categorical values
from sklearn.preprocessing import LabelEncoder
# define the columns to encode using label encoding
columns_to_encode = ["brand", "processor_model", "processor_manufacturer"]
# create a dictionary to store the trained label encoders
label_encoders = {}
# create a LabelEncoder object for each column
for col in columns_to_encode:
le = LabelEncoder()
df[col] = le.fit_transform(df[col].astype(str))
label_encoders[col] = le
# save the trained label encoders to a file
joblib.dump(label_encoders, "label_encoders.joblib")
# I'm exporting the label encoders so that when I receive new data I can apply the same
# encoders without retraining the model.
df.drop("model", axis=1, inplace=True)
# move the price column to the last position
price = df.pop("pr")
df.insert(len(df.columns), "price", price)
# reindex the dataframe from 0 to n
df = df.reset_index(drop=True)
# export the DataFrame to a CSV file
df.to_csv("smartphones_prepared_data.csv", index=False)
df
df.count()
| false | 1 | 2,725 | 1 | 2,859 | 2,725 |
||
129246923
|
<jupyter_start><jupyter_text>Sunspots
#Context
Sunspots are temporary phenomena on the Sun's photosphere that appear as spots darker than the surrounding areas. They are regions of reduced surface temperature caused by concentrations of magnetic field flux that inhibit convection. Sunspots usually appear in pairs of opposite magnetic polarity. Their number varies according to the approximately 11-year solar cycle.
Source: https://en.wikipedia.org/wiki/Sunspot
#Content :
Monthly Mean Total Sunspot Number, from 1749/01/01 to 2017/08/31
#Acknowledgements :
SIDC and Quandl.
Database from SIDC - Solar Influences Data Analysis Center - the solar physics research department of the Royal Observatory of Belgium. [SIDC website][1]
[1]: http://sidc.oma.be/
Kaggle dataset identifier: sunspots
<jupyter_code>import pandas as pd
df = pd.read_csv('sunspots/Sunspots.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 3265 entries, 0 to 3264
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 3265 non-null int64
1 Date 3265 non-null object
2 Monthly Mean Total Sunspot Number 3265 non-null float64
dtypes: float64(1), int64(1), object(1)
memory usage: 76.6+ KB
<jupyter_text>Examples:
{
"Unnamed: 0": 0,
"Date": "1749-01-31 00:00:00",
"Monthly Mean Total Sunspot Number": 96.7
}
{
"Unnamed: 0": 1,
"Date": "1749-02-28 00:00:00",
"Monthly Mean Total Sunspot Number": 104.3
}
{
"Unnamed: 0": 2,
"Date": "1749-03-31 00:00:00",
"Monthly Mean Total Sunspot Number": 116.7
}
{
"Unnamed: 0": 3,
"Date": "1749-04-30 00:00:00",
"Monthly Mean Total Sunspot Number": 92.8
}
<jupyter_script># ## Table of Contents
# 1. [Introduction](#Introduction)
# 2. [Imports](#Imports)
# 3. [Utilities](#Utilities)
# 4. [Extract Data](#Extract_Data)
# 5. [Split the Dataset](#Split_the_Dataset)
# 6. [Prepare Features and Labels](#Prepare_Features_and_Labels)
# 7. [Build and Compile the Model](#Build_and_Compile_the_Model)
# 8. [Model Evaluation](#Model_Evaluation)
# # 1. Introduction
# ### Monthly Mean Total Sunspot Number, from 1749/01/01 to 2017/08/31
# 
# ### In this notebook, we will predict "Monthly Mean Total Sunspot Number" through a simple Dense (DNN) model.
# # 2. Imports
# import libraries
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
import math
from keras.models import Sequential
from keras.layers import Dense
# # 3. Utilities
def plot_series(
x,
y,
format="-",
start=0,
end=None,
title=None,
xlabel=None,
ylabel=None,
legend=None,
):
"""
Visualizes time series data
Args:
x (array of int) - contains values for the x-axis
y (array of int or tuple of arrays) - contains the values for the y-axis
format (string) - line style when plotting the graph
label (string) - tag for the line
start (int) - first time step to plot
end (int) - last time step to plot
title (string) - title of the plot
xlabel (string) - label for the x-axis
ylabel (string) - label for the y-axis
legend (list of strings) - legend for the plot
"""
# Setup dimensions of the graph figure
plt.figure(figsize=(10, 6))
# Check if there are more than two series to plot
if type(y) is tuple:
# Loop over the y elements
for y_curr in y:
# Plot the x and current y values
plt.plot(x[start:end], y_curr[start:end], format)
else:
# Plot the x and y values
plt.plot(x[start:end], y[start:end], format)
# Label the x-axis
plt.xlabel(xlabel)
# Label the y-axis
plt.ylabel(ylabel)
# Set the legend
if legend:
plt.legend(legend)
# Set the title
plt.title(title)
# Overlay a grid on the graph
plt.grid(True)
# Draw the graph on screen
plt.show()
# # 4. Extract Data
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# https://www.kaggle.com/datasets/robervalt/sunspots
df = pd.read_csv("/kaggle/input/sunspots/Sunspots.csv")
print(df.shape)
df
df["Date"] = df["Date"].astype("datetime64[ns]")
time = df["Date"]
series = df["Monthly Mean Total Sunspot Number"]
# Preview the data
plot_series(time, series, xlabel="Month", ylabel="Monthly Mean Total Sunspot Number")
# # 5. Split the Dataset
# Define the split time
split_time = int(time.shape[0] * 0.7)
# Get the train set
time_train = time[:split_time]
train = series[:split_time]
# Get the test set
time_test = time[split_time:]
test = series[split_time:]
# Plot the train set
plot_series(time_train, train)
# # 6. Prepare Features and Labels
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
"""Generates dataset windows
Args:
series (array of float) - contains the values of the time series
window_size (int) - the number of time steps to include in the feature
batch_size (int) - the batch size
shuffle_buffer(int) - buffer size to use for the shuffle method
Returns:
dataset (TF Dataset) - TF Dataset containing time windows
"""
# Generate a TF Dataset from the series values
dataset = tf.data.Dataset.from_tensor_slices(series)
# Window the data but only take those with the specified size
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
# Flatten the windows by putting its elements in a single batch
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
# Create tuples with features and labels
dataset = dataset.map(lambda window: (window[:-1], window[-1]))
# Shuffle the windows
dataset = dataset.shuffle(shuffle_buffer)
# Create batches of windows
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
# Parameters
window_size = 30
batch_size = 32
shuffle_buffer_size = 1000
# Generate the dataset windows
train_set = windowed_dataset(train, window_size, batch_size, shuffle_buffer_size)
# Print properties of a single batch
for windows in train_set.take(1):
print(f"data type: {type(windows)}")
print(f"number of elements in the tuple: {len(windows)}")
print(f"shape of first element: {windows[0].shape}")
print(f"shape of second element: {windows[1].shape}")
# # 7. Build and Compile the Model
# # Build the model: Dense
model = tf.keras.Sequential(
[
tf.keras.layers.Dense(10, input_shape=[window_size], activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
]
)
print(model.summary())
learning_rate = 1e-5
optimizer = tf.keras.optimizers.SGD(momentum=0.9, learning_rate=learning_rate)
model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics="mae")
history = model.fit(train_set, epochs=100, verbose=0)
df_history = pd.DataFrame(history.history)
pd.reset_option("display.float_format")
df_history = df_history.sort_values("loss")
df_history.head(1)
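# A quick look at how the Huber loss and MAE evolved over the 100 epochs (my own
# addition, reusing the plot_series helper defined above):
epochs = range(len(history.history["loss"]))
plot_series(
    epochs,
    (np.array(history.history["loss"]), np.array(history.history["mae"])),
    title="Training loss and MAE",
    xlabel="Epoch",
    legend=["loss", "mae"],
)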
# # 8. Model Evaluation
def model_forecast(model, series, window_size, batch_size):
"""Uses an input model to generate predictions on data windows
Args:
model (TF Keras Model) - model that accepts data windows
series (array of float) - contains the values of the time series
window_size (int) - the number of time steps to include in the window
batch_size (int) - the batch size
Returns:
forecast (numpy array) - array containing predictions
"""
# Generate a TF Dataset from the series values
dataset = tf.data.Dataset.from_tensor_slices(series)
# Window the data but only take those with the specified size
dataset = dataset.window(window_size, shift=1, drop_remainder=True)
# Flatten the windows by putting its elements in a single batch
dataset = dataset.flat_map(lambda w: w.batch(window_size))
# Create batches of windows
dataset = dataset.batch(batch_size).prefetch(1)
# Get predictions on the entire dataset
forecast = model.predict(dataset)
return forecast
# Reduce the original series
forecast_series = series[split_time - window_size : -1]
# Use helper function to generate predictions
forecast = model_forecast(model, forecast_series, window_size, batch_size)
# Drop single dimensional axis
results = forecast.squeeze()
# Plot the results
plot_series(time_test, (test, results))
# Compute the MAE, RMSE, R2-Score
print("MAE: ", mean_absolute_error(test, results))
print("RMSE: ", math.sqrt(mean_squared_error(test, results)))
print("R2-Score: ", r2_score(test, results))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/246/129246923.ipynb
|
sunspots
|
robervalt
|
[{"Id": 129246923, "ScriptId": 38402842, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13057935, "CreationDate": "05/12/2023 06:47:41", "VersionNumber": 4.0, "Title": "Sun Spots Prediction - Simple DNN", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 252.0, "LinesInsertedFromPrevious": 7.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 245.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
|
[{"Id": 185123626, "KernelVersionId": 129246923, "SourceDatasetVersionId": 1932014}]
|
[{"Id": 1932014, "DatasetId": 2418, "DatasourceVersionId": 1970611, "CreatorUserId": 958985, "LicenseName": "CC0: Public Domain", "CreationDate": "02/11/2021 13:53:52", "VersionNumber": 3.0, "Title": "Sunspots", "Slug": "sunspots", "Subtitle": "Monthly Mean Total Sunspot Number - form 1749 to july 2018", "Description": "#Context\n\nSunspots are temporary phenomena on the Sun's photosphere that appear as spots darker than the surrounding areas. They are regions of reduced surface temperature caused by concentrations of magnetic field flux that inhibit convection. Sunspots usually appear in pairs of opposite magnetic polarity. Their number varies according to the approximately 11-year solar cycle.\n\nSource: https://en.wikipedia.org/wiki/Sunspot\n\n#Content :\n\nMonthly Mean Total Sunspot Number, from 1749/01/01 to 2017/08/31\n\n#Acknowledgements :\n\nSIDC and Quandl.\n\nDatabase from SIDC - Solar Influences Data Analysis Center - the solar physics research department of the Royal Observatory of Belgium. [SIDC website][1]\n\n [1]: http://sidc.oma.be/", "VersionNotes": "Up to 2021-01-31", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2418, "CreatorUserId": 958985, "OwnerUserId": 958985.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1932014.0, "CurrentDatasourceVersionId": 1970611.0, "ForumId": 6460, "Type": 2, "CreationDate": "09/11/2017 14:44:50", "LastActivityDate": "02/06/2018", "TotalViews": 61533, "TotalDownloads": 8993, "TotalVotes": 143, "TotalKernels": 74}]
|
[{"Id": 958985, "UserName": "robervalt", "DisplayName": "Especuloide", "RegisterDate": "03/10/2017", "PerformanceTier": 0}]
|
# ## Table of Contents
# 1. [Introduction](#Introduction)
# 2. [Imports](#Imports)
# 3. [Utilities](#Utilities)
# 4. [Extract Data](#Extract_Data)
# 5. [Split the Dataset](#Split_the_Dataset)
# 6. [Prepare Features and Labels](#Prepare_Features_and_Labels)
# 7. [Build and Compile the Model](#Build_and_Compile_the_Model)
# 8. [Model Evaluation](#Model_Evaluation)
# # 1. Introduction
# ### Monthly Mean Total Sunspot Number, from 1749/01/01 to 2017/08/31
# 
# ### In this notebook, we will predict "Monthly Mean Total Sunspot Number" through a simple Dense (DNN) model.
# # 2. Imports
# import libraries
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
import math
from keras.models import Sequential
from keras.layers import Dense
# # 3. Utilities
def plot_series(
x,
y,
format="-",
start=0,
end=None,
title=None,
xlabel=None,
ylabel=None,
legend=None,
):
"""
Visualizes time series data
Args:
x (array of int) - contains values for the x-axis
y (array of int or tuple of arrays) - contains the values for the y-axis
format (string) - line style when plotting the graph
label (string) - tag for the line
start (int) - first time step to plot
end (int) - last time step to plot
title (string) - title of the plot
xlabel (string) - label for the x-axis
ylabel (string) - label for the y-axis
legend (list of strings) - legend for the plot
"""
# Setup dimensions of the graph figure
plt.figure(figsize=(10, 6))
# Check if there are more than two series to plot
if type(y) is tuple:
# Loop over the y elements
for y_curr in y:
# Plot the x and current y values
plt.plot(x[start:end], y_curr[start:end], format)
else:
# Plot the x and y values
plt.plot(x[start:end], y[start:end], format)
# Label the x-axis
plt.xlabel(xlabel)
# Label the y-axis
plt.ylabel(ylabel)
# Set the legend
if legend:
plt.legend(legend)
# Set the title
plt.title(title)
# Overlay a grid on the graph
plt.grid(True)
# Draw the graph on screen
plt.show()
# # 4. Extract Data
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# https://www.kaggle.com/datasets/robervalt/sunspots
df = pd.read_csv("/kaggle/input/sunspots/Sunspots.csv")
print(df.shape)
df
df["Date"] = df["Date"].astype("datetime64[ns]")
time = df["Date"]
series = df["Monthly Mean Total Sunspot Number"]
# Preview the data
plot_series(time, series, xlabel="Month", ylabel="Monthly Mean Total Sunspot Number")
# # 5. Split the Dataset
# Define the split time
split_time = int(time.shape[0] * 0.7)
# Get the train set
time_train = time[:split_time]
train = series[:split_time]
# Get the test set
time_test = time[split_time:]
test = series[split_time:]
# Plot the train set
plot_series(time_train, train)
# # 6. Prepare Features and Labels
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
"""Generates dataset windows
Args:
series (array of float) - contains the values of the time series
window_size (int) - the number of time steps to include in the feature
batch_size (int) - the batch size
shuffle_buffer(int) - buffer size to use for the shuffle method
Returns:
dataset (TF Dataset) - TF Dataset containing time windows
"""
# Generate a TF Dataset from the series values
dataset = tf.data.Dataset.from_tensor_slices(series)
# Window the data but only take those with the specified size
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
# Flatten the windows by putting its elements in a single batch
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
# Create tuples with features and labels
dataset = dataset.map(lambda window: (window[:-1], window[-1]))
# Shuffle the windows
dataset = dataset.shuffle(shuffle_buffer)
# Create batches of windows
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
# Parameters
window_size = 30
batch_size = 32
shuffle_buffer_size = 1000
# Generate the dataset windows
train_set = windowed_dataset(train, window_size, batch_size, shuffle_buffer_size)
# Print properties of a single batch
for windows in train_set.take(1):
print(f"data type: {type(windows)}")
print(f"number of elements in the tuple: {len(windows)}")
print(f"shape of first element: {windows[0].shape}")
print(f"shape of second element: {windows[1].shape}")
# # 7. Build and Compile the Model
# # Build the model: Dense
model = tf.keras.Sequential(
[
tf.keras.layers.Dense(10, input_shape=[window_size], activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
]
)
print(model.summary())
learning_rate = 1e-5
optimizer = tf.keras.optimizers.SGD(momentum=0.9, learning_rate=learning_rate)
model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics="mae")
history = model.fit(train_set, epochs=100, verbose=0)
df_history = pd.DataFrame(history.history)
pd.reset_option("display.float_format")
df_history = df_history.sort_values("loss")
df_history.head(1)
# # 8. Model Evaluation
def model_forecast(model, series, window_size, batch_size):
"""Uses an input model to generate predictions on data windows
Args:
model (TF Keras Model) - model that accepts data windows
series (array of float) - contains the values of the time series
window_size (int) - the number of time steps to include in the window
batch_size (int) - the batch size
Returns:
forecast (numpy array) - array containing predictions
"""
# Generate a TF Dataset from the series values
dataset = tf.data.Dataset.from_tensor_slices(series)
# Window the data but only take those with the specified size
dataset = dataset.window(window_size, shift=1, drop_remainder=True)
# Flatten the windows by putting its elements in a single batch
dataset = dataset.flat_map(lambda w: w.batch(window_size))
# Create batches of windows
dataset = dataset.batch(batch_size).prefetch(1)
# Get predictions on the entire dataset
forecast = model.predict(dataset)
return forecast
# Reduce the original series
forecast_series = series[split_time - window_size : -1]
# Use helper function to generate predictions
forecast = model_forecast(model, forecast_series, window_size, batch_size)
# Drop single dimensional axis
results = forecast.squeeze()
# Plot the results
plot_series(time_test, (test, results))
# Compute the MAE, RMSE, R2-Score
print("MAE: ", mean_absolute_error(test, results))
print("RMSE: ", math.sqrt(mean_squared_error(test, results)))
print("R2-Score: ", r2_score(test, results))
|
[{"sunspots/Sunspots.csv": {"column_names": "[\"Unnamed: 0\", \"Date\", \"Monthly Mean Total Sunspot Number\"]", "column_data_types": "{\"Unnamed: 0\": \"int64\", \"Date\": \"object\", \"Monthly Mean Total Sunspot Number\": \"float64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 3265 entries, 0 to 3264\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Unnamed: 0 3265 non-null int64 \n 1 Date 3265 non-null object \n 2 Monthly Mean Total Sunspot Number 3265 non-null float64\ndtypes: float64(1), int64(1), object(1)\nmemory usage: 76.6+ KB\n", "summary": "{\"Unnamed: 0\": {\"count\": 3265.0, \"mean\": 1632.0, \"std\": 942.6686409691725, \"min\": 0.0, \"25%\": 816.0, \"50%\": 1632.0, \"75%\": 2448.0, \"max\": 3264.0}, \"Monthly Mean Total Sunspot Number\": {\"count\": 3265.0, \"mean\": 81.77877488514548, \"std\": 67.88927651806058, \"min\": 0.0, \"25%\": 23.9, \"50%\": 67.2, \"75%\": 122.5, \"max\": 398.2}}", "examples": "{\"Unnamed: 0\":{\"0\":0,\"1\":1,\"2\":2,\"3\":3},\"Date\":{\"0\":\"1749-01-31\",\"1\":\"1749-02-28\",\"2\":\"1749-03-31\",\"3\":\"1749-04-30\"},\"Monthly Mean Total Sunspot Number\":{\"0\":96.7,\"1\":104.3,\"2\":116.7,\"3\":92.8}}"}}]
| true | 1 |
<start_data_description><data_path>sunspots/Sunspots.csv:
<column_names>
['Unnamed: 0', 'Date', 'Monthly Mean Total Sunspot Number']
<column_types>
{'Unnamed: 0': 'int64', 'Date': 'object', 'Monthly Mean Total Sunspot Number': 'float64'}
<dataframe_Summary>
{'Unnamed: 0': {'count': 3265.0, 'mean': 1632.0, 'std': 942.6686409691725, 'min': 0.0, '25%': 816.0, '50%': 1632.0, '75%': 2448.0, 'max': 3264.0}, 'Monthly Mean Total Sunspot Number': {'count': 3265.0, 'mean': 81.77877488514548, 'std': 67.88927651806058, 'min': 0.0, '25%': 23.9, '50%': 67.2, '75%': 122.5, 'max': 398.2}}
<dataframe_info>
RangeIndex: 3265 entries, 0 to 3264
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 3265 non-null int64
1 Date 3265 non-null object
2 Monthly Mean Total Sunspot Number 3265 non-null float64
dtypes: float64(1), int64(1), object(1)
memory usage: 76.6+ KB
<some_examples>
{'Unnamed: 0': {'0': 0, '1': 1, '2': 2, '3': 3}, 'Date': {'0': '1749-01-31', '1': '1749-02-28', '2': '1749-03-31', '3': '1749-04-30'}, 'Monthly Mean Total Sunspot Number': {'0': 96.7, '1': 104.3, '2': 116.7, '3': 92.8}}
<end_description>
| 2,093 | 3 | 2,715 | 2,093 |
129246613
|
# # Project Blood Cell Detection
# # Import libraries
import pandas as pd # handling manipulation of data to get size of images and count them
import numpy as np
import matplotlib.pyplot as plt # handle showing images
import seaborn as sns
sns.set(
style="whitegrid"
) # handling style of showing images and graphics in matplotlib
import os # handling folders & read and remove folders
import glob as gb # handle pathes of folders & return content
import cv2 # handling images and resizing
import tensorflow as tf
from tensorflow import keras
from keras.layers import LeakyReLU
# ## loading data
### for Jupyter
train_path = "/kaggle/input/blood-cells/dataset2-master/dataset2-master/images/TRAIN/"
test_path = "/kaggle/input/blood-cells/dataset2-master/dataset2-master/images/TEST/"
# ## open path for train data
# dictionary of 4 classes in dataset
encode = {"EOSINOPHIL": 0, "LYMPHOCYTE": 1, "MONOCYTE": 2, "NEUTROPHIL": 3}
for folder in os.listdir(train_path):
folder_path = os.path.join(train_path, folder)
count = 0
for img in os.listdir(folder_path):
folder_path_img = os.path.join(folder_path, img)
count += 1
print(f"Training_data , there is {count} in folder {folder}")
# ## open path for test data
for folder in os.listdir(test_path):
folder_path = os.path.join(test_path, folder)
count = 0
for img in os.listdir(folder_path):
folder_path_img = os.path.join(folder_path, img)
count += 1
print(f"Testing_data , there is {count} in folder {folder}")
# need to check the image sizes
def get_encode(n):
for x, y in encode.items():
if n == y:
return x
# print (encode.items())
# # images sizes in train folder
#
size = []
for folder in os.listdir(train_path):
folder_path = os.path.join(train_path, folder)
for img in os.listdir(folder_path):
img = os.path.join(folder_path, img)
# print(img)
image = plt.imread(img)
size.append(image.shape)
# count each similar size contained in size list to know most common size in images
pd.Series(size).value_counts()
# # images sizes in test folder
# images sizes in test folder
size = []
for folder in os.listdir(test_path):
folder_path = os.path.join(test_path, folder)
for img in os.listdir(folder_path):
img = os.path.join(folder_path, img)
# print(img)
image = plt.imread(img)
size.append(image.shape)
# count each similar size contained in size list to know most common size in images
pd.Series(size).value_counts()
# ## read image
s = 100
X_train = []
y_train = []
for folder in os.listdir(train_path):
folder_path = os.path.join(train_path, folder)
for img in os.listdir(folder_path):
img = os.path.join(folder_path, img)
image = cv2.imread(img)
        # cv2.resize to a fixed (s, s) rescales every image to 100x100; note this does not preserve the aspect ratio
image_array = cv2.resize(image, (s, s))
X_train.append(list(image_array))
y_train.append(encode[folder])
print(f"count of x_train {len(X_train)} items in X_train")
# ## train image
# n : represent index of subplot started by default with 0 from enumerate()
# i : represent index of images we will choose from randint()
# (20*20) inches of figure
plt.figure(figsize=(20, 20))
for n, i in enumerate(list(np.random.randint(0, len(X_train), 49))):
plt.subplot(7, 7, n + 1)
plt.imshow(X_train[i])
plt.axis("off")
plt.title(get_encode(y_train[i]))
# now for test
s = 100
X_test = []
y_test = []
for folder in os.listdir(test_path):
folder_path = os.path.join(test_path, folder)
for img in os.listdir(folder_path):
img = os.path.join(folder_path, img)
image = cv2.imread(img)
        # cv2.resize to a fixed (s, s) rescales every image to 100x100; note this does not preserve the aspect ratio
image_array = cv2.resize(image, (s, s))
X_test.append(list(image_array))
y_test.append(encode[folder])
print(f"count of x_test {len(X_test)} items in X_test")
# ## test image
plt.figure(figsize=(20, 20))
for n, i in enumerate(list(np.random.randint(0, len(X_test), 49))):
plt.subplot(7, 7, n + 1)
plt.imshow(X_test[i])
plt.axis("off")
    plt.title(get_encode(y_test[i]))
# ## Building Model
# building model
X_train = np.array(X_train)
X_test = np.array(X_test)
y_train = np.array(y_train)
y_test = np.array(y_test)
print(f"X_train shape is {X_train.shape}")
print(f"X_test shape is {X_test.shape}")
print(f"y_train shape is {y_train.shape}")
print(f"y_test shape is {y_test.shape}")
# ## Normalize data
X_train = X_train / 255
X_test = X_test / 255
KerasModel = keras.models.Sequential(
[
keras.layers.Conv2D(
32, kernel_size=(5, 5), activation="relu", input_shape=(100, 100, 3)
),
keras.layers.MaxPool2D(2, 2),
keras.layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
keras.layers.MaxPool2D(2, 2),
keras.layers.Conv2D(128, kernel_size=(3, 3), activation="relu"),
keras.layers.MaxPool2D(2, 2),
keras.layers.BatchNormalization(),
keras.layers.Flatten(),
keras.layers.Dense(128, activation="leaky_relu"),
keras.layers.Dropout(rate=0.5),
keras.layers.Dense(64, activation="relu"),
keras.layers.Dropout(rate=0.4),
keras.layers.Dense(32, activation="relu"),
keras.layers.Dropout(rate=0.4),
keras.layers.Dense(4, activation="softmax"),
]
)
KerasModel.compile(
optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)
print("Model Details are : ")
print(KerasModel.summary())
ThisModel = KerasModel.fit(X_train, y_train, epochs=10, verbose=1)
ModelLoss, ModelAccuracy = KerasModel.evaluate(X_test, y_test)
print("Test Loss is {}".format(ModelLoss))
print("Test Accuracy is {}".format(ModelAccuracy))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/246/129246613.ipynb
| null | null |
[{"Id": 129246613, "ScriptId": 38200558, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14383116, "CreationDate": "05/12/2023 06:44:40", "VersionNumber": 1.0, "Title": "notebook3fd1501416", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 212.0, "LinesInsertedFromPrevious": 212.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# # Project Blood Cell Detection
# # Import libraries
import pandas as pd # handling manipulation of data to get size of images and count them
import numpy as np
import matplotlib.pyplot as plt # handle showing images
import seaborn as sns
sns.set(
style="whitegrid"
) # handling style of showing images and graphics in matplotlib
import os # handling folders & read and remove folders
import glob as gb # handle pathes of folders & return content
import cv2 # handling images and resizing
import tensorflow as tf
from tensorflow import keras
from keras.layers import LeakyReLU
# ## loading data
### for Jupyter
train_path = "/kaggle/input/blood-cells/dataset2-master/dataset2-master/images/TRAIN/"
test_path = "/kaggle/input/blood-cells/dataset2-master/dataset2-master/images/TEST/"
# ## open path for train data
# dictionary of 4 classes in dataset
encode = {"EOSINOPHIL": 0, "LYMPHOCYTE": 1, "MONOCYTE": 2, "NEUTROPHIL": 3}
for folder in os.listdir(train_path):
folder_path = os.path.join(train_path, folder)
count = 0
for img in os.listdir(folder_path):
folder_path_img = os.path.join(folder_path, img)
count += 1
print(f"Training_data , there is {count} in folder {folder}")
# ## open path for test data
for folder in os.listdir(test_path):
folder_path = os.path.join(test_path, folder)
count = 0
for img in os.listdir(folder_path):
folder_path_img = os.path.join(folder_path, img)
count += 1
print(f"Testing_data , there is {count} in folder {folder}")
# need to check the image sizes
def get_encode(n):
for x, y in encode.items():
if n == y:
return x
# print (encode.items())
# # images sizes in train folder
#
size = []
for folder in os.listdir(train_path):
folder_path = os.path.join(train_path, folder)
for img in os.listdir(folder_path):
img = os.path.join(folder_path, img)
# print(img)
image = plt.imread(img)
size.append(image.shape)
# count each similar size contained in size list to know most common size in images
pd.Series(size).value_counts()
# # images sizes in test folder
# images sizes in test folder
size = []
for folder in os.listdir(test_path):
folder_path = os.path.join(test_path, folder)
for img in os.listdir(folder_path):
img = os.path.join(folder_path, img)
# print(img)
image = plt.imread(img)
size.append(image.shape)
# count each similar size contained in size list to know most common size in images
pd.Series(size).value_counts()
# ## read image
s = 100
X_train = []
y_train = []
for folder in os.listdir(train_path):
folder_path = os.path.join(train_path, folder)
for img in os.listdir(folder_path):
img = os.path.join(folder_path, img)
image = cv2.imread(img)
        # cv2.resize to a fixed (s, s) rescales every image to 100x100; note this does not preserve the aspect ratio
image_array = cv2.resize(image, (s, s))
X_train.append(list(image_array))
y_train.append(encode[folder])
print(f"count of x_train {len(X_train)} items in X_train")
# ## train image
# n : represent index of subplot started by default with 0 from enumerate()
# i : represent index of images we will choose from randint()
# (20*20) inches of figure
plt.figure(figsize=(20, 20))
for n, i in enumerate(list(np.random.randint(0, len(X_train), 49))):
plt.subplot(7, 7, n + 1)
plt.imshow(X_train[i])
plt.axis("off")
plt.title(get_encode(y_train[i]))
# now for test
s = 100
X_test = []
y_test = []
for folder in os.listdir(test_path):
folder_path = os.path.join(test_path, folder)
for img in os.listdir(folder_path):
img = os.path.join(folder_path, img)
image = cv2.imread(img)
        # cv2.resize to a fixed (s, s) rescales every image to 100x100; note this does not preserve the aspect ratio
image_array = cv2.resize(image, (s, s))
X_test.append(list(image_array))
y_test.append(encode[folder])
print(f"count of x_test {len(X_test)} items in X_test")
# ## test image
plt.figure(figsize=(20, 20))
for n, i in enumerate(list(np.random.randint(0, len(X_test), 49))):
plt.subplot(7, 7, n + 1)
plt.imshow(X_test[i])
plt.axis("off")
    plt.title(get_encode(y_test[i]))
# ## Building Model
# building model
X_train = np.array(X_train)
X_test = np.array(X_test)
y_train = np.array(y_train)
y_test = np.array(y_test)
print(f"X_train shape is {X_train.shape}")
print(f"X_test shape is {X_test.shape}")
print(f"y_train shape is {y_train.shape}")
print(f"y_test shape is {y_test.shape}")
# ## Normalize data
X_train = X_train / 255
X_test = X_test / 255
KerasModel = keras.models.Sequential(
[
keras.layers.Conv2D(
32, kernel_size=(5, 5), activation="relu", input_shape=(100, 100, 3)
),
keras.layers.MaxPool2D(2, 2),
keras.layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
keras.layers.MaxPool2D(2, 2),
keras.layers.Conv2D(128, kernel_size=(3, 3), activation="relu"),
keras.layers.MaxPool2D(2, 2),
keras.layers.BatchNormalization(),
keras.layers.Flatten(),
keras.layers.Dense(128, activation="leaky_relu"),
keras.layers.Dropout(rate=0.5),
keras.layers.Dense(64, activation="relu"),
keras.layers.Dropout(rate=0.4),
keras.layers.Dense(32, activation="relu"),
keras.layers.Dropout(rate=0.4),
keras.layers.Dense(4, activation="softmax"),
]
)
KerasModel.compile(
optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)
print("Model Details are : ")
print(KerasModel.summary())
ThisModel = KerasModel.fit(X_train, y_train, epochs=10, verbose=1)
ModelLoss, ModelAccuracy = KerasModel.evaluate(X_test, y_test)
print("Test Loss is {}".format(ModelLoss))
print("Test Accuracy is {}".format(ModelAccuracy))
| false | 0 | 1,898 | 0 | 1,898 | 1,898 |
||
129676202
|
<jupyter_start><jupyter_text>Glasses and Coverings
## New
Please see an updated dataset with extended category list: [Face Attributes Grouped](https://www.kaggle.com/datasets/mantasu/face-attributes-grouped)
## About
This dataset contains 4 categories of people with face accessories: plain, eyeglasses, sunglasses, and coverings. All the images were aligned and center-cropped to `256×256`.
> **Note**: the dataset may contain duplicates, imperfect alignment, or outlier images, not representative of the indicated category. There may also be mixed images, e.g., a picture of a person wearing spectacles and a mask.
## Structure
There are a total of 5 directories:
* `plain` [497] - images of regular people's faces
* `glasses` [528] - images of people wearing spectacles
* `sunglasses` [488] - images of people wearing sunglasses
* `sunglasses-imagenet` [580] - additional images of people wearing sunglasses
* `covering` [444] - images of people with covered faces (covered by hair, masks, hoodies, hands, face paint, etc.)
## Collection
All the images were collected from [Google Images](https://images.google.com/) using [google-images-download](https://pypi.org/project/google_images_download/) with the usage rights flag `labeled-for-noncommercial-reuse-with-modification`. The exception is the `sunglasses-imagenet` folder which contains pictures from [image-net](https://www.image-net.org/), specifically downloaded from [images.cv](https://images.cv/dataset/sunglasses-image-classification-dataset).
All the images were aligned with [opencv-python](https://pypi.org/project/opencv-python/) and center-cropped to `256×256` with reflective padding. It is harder to align pictures with various face coverings, therefore, there are fewer images in that category.
## License
This dataset is marked under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/), meaning you can share and modify the data for non-commercial reuse as long as you provide a copyright notice. For the images downloaded from [Google Images](https://images.google.com/), you can find the original authors' licenses by looking up the image metadata based on their file names.
Kaggle dataset identifier: glasses-and-coverings
<jupyter_script># ### In this kernel,
# I decided to use a **G**enerative **A**dversarial **N**etwork on the images with sunglasses to create synthetic images of people wearing glasses.
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
from torch.utils.data import DataLoader, Subset
import torchvision.utils as vutils
from torch.autograd import Variable
from torch import nn, optim
import torch.nn.functional as F
from torchvision import datasets, transforms
from torchvision.utils import save_image
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
import numpy as np
import cv2
from PIL import Image
# ### Importing the images.
# Using PyTorch's ImageFolder and DataLoader we can load all of the images from a single place.
# This is roughly analogous to TensorFlow's ImageDataGenerator, if you are more familiar with TF.
# Since I only want the glasses subset, I drop the 'covering' and 'plain' classes and keep the spectacles and sunglasses folders.
BATCH_SIZE = 32
IMAGE_SIZE = 64
transform = transforms.Compose(
[
transforms.Resize(IMAGE_SIZE),
transforms.ColorJitter(brightness=(1, 1.5)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]
)
train_data = datasets.ImageFolder(
"/kaggle/input/glasses-and-coverings/glasses-and-coverings", transform=transform
)
idx = [i for i in range(len(train_data)) if train_data.imgs[i][1] not in [0, 2]]
# idx += [i for i in range(len(train_data)) if train_data.imgs[i][1] != train_data.class_to_idx['plain']]
dataloader = DataLoader(Subset(train_data, idx), shuffle=True, batch_size=BATCH_SIZE)
len(dataloader)
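# Quick sanity check on the filter above: assuming ImageFolder's default alphabetical class order,
# dropping labels 0 and 2 removes 'covering' and 'plain' and keeps the glasses/sunglasses folders.
print(train_data.class_to_idx)
print(f"Images kept after filtering: {len(idx)}")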
# ### Sample Images
imgs, label = next(iter(dataloader))
imgs = imgs.numpy().transpose(0, 2, 3, 1)
for i in range(6):
plt.subplot(3, 2, i + 1)
    plt.imshow((imgs[i] + 1) / 2)  # undo the [-1, 1] normalisation so imshow gets values in [0, 1]
# ### GAN
# GAN is essentially a 2-player game between two modules of the architecture.
# It has the following components:
# - Generator: creates synthetic images and passes them to the discriminator.
# - Discriminator: classifies each image as real or synthetic.
class Generator2(nn.Module):
def __init__(self, nz=128, channels=3):
super(Generator2, self).__init__()
self.nz = nz
self.channels = channels
def convlayer(n_input, n_output, k_size=4, stride=2, padding=0):
block = [
nn.ConvTranspose2d(
n_input,
n_output,
kernel_size=k_size,
stride=stride,
padding=padding,
bias=False,
),
nn.BatchNorm2d(n_output),
nn.ReLU(inplace=True),
]
return block
self.model = nn.Sequential(
*convlayer(
self.nz, 1024, 4, 1, 0
), # Fully connected layer via convolution.
*convlayer(1024, 512, 4, 2, 1),
*convlayer(512, 256, 4, 2, 1),
*convlayer(256, 128, 4, 2, 1),
*convlayer(128, 64, 4, 2, 1),
nn.ConvTranspose2d(64, self.channels, 3, 1, 1),
nn.Tanh(),
)
def forward(self, z):
z = z.view(-1, self.nz, 1, 1)
img = self.model(z)
return img
class Discriminator2(nn.Module):
def __init__(self, channels=3):
super(Discriminator2, self).__init__()
self.channels = channels
def convlayer(n_input, n_output, k_size=4, stride=2, padding=0, bn=False):
block = [
nn.Conv2d(
n_input,
n_output,
kernel_size=k_size,
stride=stride,
padding=padding,
bias=False,
)
]
if bn:
block.append(nn.BatchNorm2d(n_output))
block.append(nn.LeakyReLU(0.2, inplace=True))
return block
self.model = nn.Sequential(
*convlayer(self.channels, 32, 4, 2, 1),
*convlayer(32, 64, 4, 2, 1),
*convlayer(64, 128, 4, 2, 1, bn=True),
*convlayer(128, 256, 4, 2, 1, bn=True),
nn.Conv2d(256, 1, 4, 1, 0, bias=False), # FC with Conv.
)
def forward(self, imgs):
logits = self.model(imgs)
out = torch.sigmoid(logits)
return out.view(-1, 1)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
netG = Generator2().to(device)
netD = Discriminator2().to(device)
# ### Hyperparameter:
EPOCH = 50
LR = 0.0005
criterion = nn.BCELoss()
optimizerD = optim.Adam(netD.parameters(), lr=LR, betas=(0.5, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=LR, betas=(0.5, 0.999))
real_label = 1.0
fake_label = 0.0
G_losses = []
D_losses = []
# The training method involves 3 steps:
# 1. Pass the real images, labelled as real (1), to the discriminator to train it on real data.
# 2. Sample noise from a standard normal distribution and pass it through the generator to create the 'fake' or synthetic images.
# 3. Pass these synthetic images to the discriminator with the fake label (0) so it learns to recognise them as fake.
for epoch in range(EPOCH):
for ii, (real_images, train_labels) in enumerate(dataloader):
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
# train with real
netD.zero_grad()
real_images = real_images.to(device)
batch_size = real_images.size(0)
labels = torch.full((batch_size, 1), real_label, device=device)
output = netD(real_images)
# print(real_images.size(), output.size(), labels.size())
errD_real = criterion(output, labels)
errD_real.backward()
D_x = output.mean().item()
# train with fake
noise = torch.randn(batch_size, 128, 1, 1, device=device)
fake = netG(noise)
labels.fill_(fake_label)
output = netD(fake.detach())
# print(real_images.size(), output.size(), labels.size())
errD_fake = criterion(output, labels)
errD_fake.backward()
D_G_z1 = output.mean().item()
errD = errD_real + errD_fake
optimizerD.step()
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad()
labels.fill_(real_label) # fake labels are real for generator cost
output = netD(fake)
errG = criterion(output, labels)
errG.backward()
D_G_z2 = output.mean().item()
optimizerG.step()
# Save Losses for plotting later
G_losses.append(errG.item())
D_losses.append(errD.item())
if (ii + 1) % (len(dataloader) // 2) == 0:
print(
f"[{epoch+1}/{EPOCH}][{ii+1}/{len(dataloader)}] Loss_D: {errD.item()} Loss_G: {errG.item()}"
)
# If you look carefully there is a pattern here: whenever the generator loss (Loss_G) decreases, the discriminator loss (Loss_D) tends to increase.
# This reflects the adversarial nature of GANs.
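# The losses stored above were saved "for plotting later"; a minimal plot makes the see-saw
# pattern between the two losses visible (sketch, reusing the matplotlib import from above).
plt.figure(figsize=(10, 5))
plt.plot(G_losses, label="Generator loss")
plt.plot(D_losses, label="Discriminator loss")
plt.xlabel("Iteration")
plt.ylabel("Loss")
plt.legend()
plt.show()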
random_vectors = torch.randn(16, 128, 1, 1, device=device)  # sample from the same standard-normal prior used during training
fake = netG(random_vectors)
images = fake.to("cpu").clone().detach()
images = images.numpy().transpose(0, 2, 3, 1)
plt.figure(figsize=(15, 18))
for i in range(16):
plt.subplot(8, 2, i + 1)
    plt.imshow((images[i] + 1) / 2)  # generator outputs are tanh-scaled to [-1, 1]; rescale to [0, 1] for display
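# To reuse the trained networks later without retraining, the usual PyTorch pattern is to save the
# state dicts; the file names below are placeholders, not part of the original notebook.
torch.save(netG.state_dict(), "generator_sunglasses.pth")
torch.save(netD.state_dict(), "discriminator_sunglasses.pth")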
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/676/129676202.ipynb
|
glasses-and-coverings
|
mantasu
|
[{"Id": 129676202, "ScriptId": 38345178, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8095214, "CreationDate": "05/15/2023 16:55:52", "VersionNumber": 1.0, "Title": "GANS Intro", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 209.0, "LinesInsertedFromPrevious": 209.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 6}]
|
[{"Id": 185992743, "KernelVersionId": 129676202, "SourceDatasetVersionId": 5124767}]
|
[{"Id": 5124767, "DatasetId": 2976601, "DatasourceVersionId": 5196215, "CreatorUserId": 5598802, "LicenseName": "Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)", "CreationDate": "03/08/2023 00:34:23", "VersionNumber": 2.0, "Title": "Glasses and Coverings", "Slug": "glasses-and-coverings", "Subtitle": "Glasses | Sunglasses | Masks", "Description": "## New\n\nPlease see an updated dataset with extended category list: [Face Attributes Grouped](https://www.kaggle.com/datasets/mantasu/face-attributes-grouped)\n\n## About\n\nThis dataset contains 4 categories of people with face accessories: plain, eyeglasses, sunglasses, and coverings. All the images were aligned and center-cropped to `256\u00d7256`.\n\n> **Note**: the dataset may contain duplicates, imperfect alignment, or outlier images, not representative of the indicated category. There may also be mixed images, e.g., a picture of a person wearing spectacles and a mask.\n\n## Structure\n\nThere are a total of 5 directories:\n* `plain` [497] - images of regular people's faces\n* `glasses` [528] - images of people wearing spectacles\n* `sunglasses` [488] - images of people wearing sunglasses\n* `sunglasses-imagenet` [580] - additional images of people wearing sunglasses\n* `covering` [444] - images of people with covered faces (covered by hair, masks, hoodies, hands, face paint, etc.)\n\n## Collection\n\nAll the images were collected from [Google Images](https://images.google.com/) using [google-images-download](https://pypi.org/project/google_images_download/) with the usage rights flag `labeled-for-noncommercial-reuse-with-modification`. The exception is the `sunglasses-imagenet` folder which contains pictures from [image-net](https://www.image-net.org/), specifically downloaded from [images.cv](https://images.cv/dataset/sunglasses-image-classification-dataset).\n\nAll the images were aligned with [opencv-python](https://pypi.org/project/opencv-python/) and center-cropped to `256\u00d7256` with reflective padding. It is harder to align pictures with various face coverings, therefore, there are fewer images in that category.\n\n## License\n\nThis dataset is marked under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/), meaning you can share and modify the data for non-commercial reuse as long as you provide a copyright notice. For the images downloaded from [Google Images](https://images.google.com/), you can find the original authors' licenses by looking up the image metadata based on their file names.", "VersionNotes": "Glasses and Coverings", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2976601, "CreatorUserId": 5598802, "OwnerUserId": 5598802.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5124767.0, "CurrentDatasourceVersionId": 5196215.0, "ForumId": 3015125, "Type": 2, "CreationDate": "03/08/2023 00:31:41", "LastActivityDate": "03/08/2023", "TotalViews": 2582, "TotalDownloads": 285, "TotalVotes": 13, "TotalKernels": 2}]
|
[{"Id": 5598802, "UserName": "mantasu", "DisplayName": "Mantas", "RegisterDate": "08/09/2020", "PerformanceTier": 2}]
|
| false | 0 | 2,273 | 6 | 2,867 | 2,273 |
||
129676176
|
<jupyter_start><jupyter_text>Audio MNIST
### Context
A Large dataset of Audio MNIST, 30000 audio samples of spoken digits (0-9) of 60 different speakers.
### Content
**data (audioMNIST)**
- The dataset consists of 30000 audio samples of spoken digits (0-9) from 60 speakers, with 500 recordings per speaker.
- There is one directory per speaker holding the audio recordings.
- Additionally "audioMNIST_meta.txt" provides meta information such as gender or age of each speaker.
Kaggle dataset identifier: audio-mnist
<jupyter_script>import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import librosa
from tqdm import tqdm
from IPython.display import Audio
from IPython.display import clear_output
import math
sns.set_style("darkgrid")
plt.rcParams["figure.figsize"] = (20, 10)
# # Data
base_dir = "/kaggle/input/audio-mnist/data"
file_paths = []
for i in tqdm(os.listdir(base_dir)[:30]):
try:
for audio in os.listdir(os.path.join(base_dir, i)):
file_paths.append(os.path.join(base_dir, i, audio))
    except Exception as e:
        pass  # skip non-directory entries such as audioMNIST_meta.txt
len(file_paths)
# # load audio using librosa
# ### Demo Audio
audio, sr = librosa.load(file_paths[0])
audio = librosa.util.normalize(audio)
audiowgt = Audio(data=audio, rate=sr)
display(audiowgt)
librosa.display.waveshow(audio, sr=sr)
audio2, sr = librosa.load(file_paths[52])
audio2 = librosa.util.normalize(audio2)
audiowgt2 = Audio(data=audio2, rate=sr)
display(audiowgt2)
librosa.display.waveshow(audio2, sr=sr)
# ### Helper Functions
def load_audio(path):
"""Loads audio and also center pads them to max len 1 second
Args:
path: Audio File Path
Returns:
Audio : center padded audio with max len 1 second
"""
audio, sr = librosa.load(path)
# lets pad for 1 second
len_audio = librosa.get_duration(y=audio, sr=sr)
if len_audio >= 1:
return audio[: 1 * sr], sr
else:
pad_audio = librosa.util.pad_center(data=audio, size=(1 * sr))
return pad_audio, sr
def get_white_noise(signal, SNR=40):
    # white Gaussian noise whose power sits SNR dB below the signal power
    RMS_s = math.sqrt(np.mean(signal**2))
    RMS_n = math.sqrt(RMS_s**2 / (pow(10, SNR / 10)))
    STD_n = RMS_n  # for zero-mean noise the RMS equals the standard deviation
    noise = np.random.normal(0, STD_n, signal.shape[0])
    return noise
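# Hedged check of the two helpers above, using the first demo file already collected in file_paths:
# load_audio should return exactly one second of samples, and the noise returned by get_white_noise
# should sit roughly SNR dB below the signal power.
check_audio, check_sr = load_audio(file_paths[0])
print("exactly 1 second:", len(check_audio) == check_sr)
check_noise = get_white_noise(check_audio, SNR=40)
measured_snr = 10 * np.log10(np.mean(check_audio**2) / np.mean(check_noise**2))
print(f"measured SNR: {measured_snr:.1f} dB")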
def get_mfcc(path, snr=40, aug=False):
audio, sr = load_audio(path)
audio = librosa.util.normalize(audio)
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13, hop_length=512, n_fft=2048)
if aug:
aug_audio = audio + get_white_noise(audio, snr)
aug_mfcc = librosa.feature.mfcc(
y=aug_audio, sr=sr, n_mfcc=13, hop_length=512, n_fft=2048
)
return [np.expand_dims(mfcc, axis=-1), np.expand_dims(aug_mfcc, axis=-1)]
return np.expand_dims(mfcc, axis=-1)
X = []
Y = []
k = 0
aug = True  # when True, also keep a white-noise-augmented copy of every clip
for i in file_paths:
k = k + 1
display(k)
    clss = i.split("/")[-1].split(".")[0].split("_")[0]  # the spoken digit is the first token of the file name
    mfcs = get_mfcc(i, aug=aug)
for j in mfcs:
X.append(j)
Y.append(int(clss))
clear_output(wait=True)
# # model
from sklearn.model_selection import train_test_split
xt, xv, yt, yv = train_test_split(
np.asarray(X), np.asarray(Y), test_size=0.2, random_state=42
)
def plot(history, epochs, model=None, save=False):
hist = history.history
plt.subplot(1, 2, 1)
plt.title("Loss Graph")
plt.plot(range(epochs), hist["loss"], color="r", label="Loss")
plt.plot(range(epochs), hist["val_loss"], color="g", label="Val_loss")
plt.legend()
plt.tight_layout()
plt.subplot(1, 2, 2)
plt.title("Accuracy Graph")
plt.plot(range(epochs), hist["accuracy"], color="r", label="accuracy")
plt.plot(range(epochs), hist["val_accuracy"], color="g", label="Val_accuracy")
plt.legend()
plt.tight_layout()
plt.suptitle("Loss-Accuracy Plot")
l_low, a_max = min(hist["val_loss"]), max(hist["val_accuracy"])
    if save and model is not None:
        # assumes the ./progress/history plot/ and ./progress/model/ directories already exist
        plt.savefig(f"./progress/history plot/lmin{l_low}_amin{a_max}.png")
        model.save(f"./progress/model/lmin{l_low}_amin{a_max}.h5")
        print("Plot and Model Saved")
import tensorflow as tf
from tensorflow.keras.layers import (
Conv2D,
AveragePooling2D,
GlobalAvgPool2D,
Dense,
InputLayer,
)
Input_shape = xv[0].shape
Input_shape
model = tf.keras.models.Sequential(
[
InputLayer(input_shape=Input_shape),
Conv2D(64, (3), padding="valid", activation="relu", name="conv_1"),
Conv2D(64, (3), padding="valid", activation="relu", name="conv_2"),
AveragePooling2D(pool_size=(3), strides=(2), padding="same", name="avg_1"),
Conv2D(128, (3), padding="valid", activation="relu", name="conv_3"),
GlobalAvgPool2D(),
Dense(64, activation="relu"),
Dense(32, activation="relu"),
Dense(10, activation="softmax"),
]
)
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=["accuracy"],
)
model.summary()
es_cb = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)
history = model.fit(
xt, yt, batch_size=32, epochs=100, validation_data=(xv, yv), callbacks=[es_cb]
)
plot(history, len(history.history["loss"]))
# # Test
pred = model.predict(xv)  # evaluate on the held-out validation split, not the training data
pred.shape
preds = [np.argmax(i) for i in pred]
from sklearn.metrics import classification_report
print(classification_report(yv, preds))
from sklearn.metrics import confusion_matrix
cfm = confusion_matrix(yv, preds)
sns.heatmap(cfm, annot=True, fmt="d")
plt.title("Confusion Matrix")
plt.show()
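# End-to-end sanity check on a single recording (a sketch reusing the helpers and trained model from
# above): run one file through the same MFCC pipeline and compare the prediction with the digit
# encoded at the start of its file name.
sample_path = file_paths[0]
sample_mfcc = np.expand_dims(get_mfcc(sample_path), axis=0)  # add a batch dimension
sample_pred = int(np.argmax(model.predict(sample_mfcc), axis=1)[0])
print("file:", sample_path.split("/")[-1], "-> predicted digit:", sample_pred)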
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/676/129676176.ipynb
|
audio-mnist
|
sripaadsrinivasan
|
[{"Id": 129676176, "ScriptId": 38515455, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9517972, "CreationDate": "05/15/2023 16:55:38", "VersionNumber": 1.0, "Title": "MNIST_AUDIO_tensorflow", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 167.0, "LinesInsertedFromPrevious": 167.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185992687, "KernelVersionId": 129676176, "SourceDatasetVersionId": 1788543}]
|
[{"Id": 1788543, "DatasetId": 1063054, "DatasourceVersionId": 1825950, "CreatorUserId": 2390050, "LicenseName": "CC0: Public Domain", "CreationDate": "12/28/2020 07:34:33", "VersionNumber": 1.0, "Title": "Audio MNIST", "Slug": "audio-mnist", "Subtitle": "Audio Samples of spoken digits (0-9) of 60 different speakers.", "Description": "### Context\n\nA Large dataset of Audio MNIST, 30000 audio samples of spoken digits (0-9) of 60 different speakers.\n\n\n### Content\n**data (audioMNIST)**\n- The dataset consists of 30000 audio samples of spoken digits (0-9) of 60 folders and 500 files each.\n- There is one directory per speaker holding the audio recordings.\n- Additionally \"audioMNIST_meta.txt\" provides meta information such as gender or age of each speaker.\n\n### Acknowledgements\n\n[Here's](https://github.com/soerenab/AudioMNIST) the original git repo for the project.\n\n### Inspiration\n\nA high quality Audio MNIST was missing.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1063054, "CreatorUserId": 2390050, "OwnerUserId": 2390050.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1788543.0, "CurrentDatasourceVersionId": 1825950.0, "ForumId": 1080111, "Type": 2, "CreationDate": "12/28/2020 07:34:33", "LastActivityDate": "12/28/2020", "TotalViews": 15597, "TotalDownloads": 1639, "TotalVotes": 43, "TotalKernels": 19}]
|
[{"Id": 2390050, "UserName": "sripaadsrinivasan", "DisplayName": "Sripaad Srinivasan", "RegisterDate": "10/22/2018", "PerformanceTier": 2}]
|
| false | 0 | 1,729 | 0 | 1,873 | 1,729 |
||
129601226
|
# library imports
import pandas as pd
import spacy
from spacy import displacy
from spacy.tokens import DocBin
import json
from datetime import datetime
from tqdm import tqdm
import re
# this dictionary will contain all annotated examples
collective_dict = {"TRAINING_DATA": []}
def structure_training_data(text, kw_list):
results = []
entities = []
# search for instances of keywords within the text (ignoring letter case)
for kw in tqdm(kw_list):
search = re.finditer(kw, text, flags=re.IGNORECASE)
# store the start/end character positions
all_instances = [[m.start(), m.end()] for m in search]
# if the callable_iterator found matches, create an 'entities' list
if len(all_instances) > 0:
for i in all_instances:
start = i[0]
end = i[1]
entities.append((start, end, "BOOK"))
# alert when no matches are found given the user inputs
else:
print("No pattern matches found. Keyword:", kw)
# add any found entities into a JSON format within collective_dict
if len(entities) > 0:
results = [text, {"entities": entities}]
collective_dict["TRAINING_DATA"].append(results)
return
text1 = "To Kill a Mockingbird is a novel by Harper Lee, published in 1960.\
The Catcher in the Rye is a novel by J. D. Salinger, published in 1951.\
One Hundred Years of Solitude is a novel by Gabriel Garcia Marquez, published in 1967."
text2 = "The Great Gatsby is a novel by F. Scott Fitzgerald, published in 1925.\
1984 is a dystopian novel by George Orwell, published in 1949.\
Animal Farm is an allegorical novella by George Orwell, first published in England on 17 August 1945.\
Brave New World is a dystopian novel by Aldous Huxley, published in 1932."
text3 = "The Lord of the Rings is an epic high fantasy novel by J. R. R. Tolkien, first published in 1954.\
The Hobbit, or There and Back Again is a children's fantasy novel by J. R. R. Tolkien, published in 1937.\
Pride and Prejudice is a romantic novel of manners written by Jane Austen in 1813."
text4 = "The Da Vinci Code is a 2003 mystery thriller novel by Dan Brown.\
The Alchemist is a novel by Paulo Coelho.\
Harry Potter and the Philosophers Stone is a fantasy novel by J.K. Rowling, first published in 1997."
# TRAINING
structure_training_data(
text1,
[
"To Kill a Mockingbird",
"The Catcher in the Rye",
"One Hundred Years of Solitude",
],
)
structure_training_data(
text2, ["The Great Gatsby", "1984", "Animal Farm", "Brave New World"]
)
structure_training_data(
text3,
[
"The Lord of the Rings",
"The Hobbit, or There and Back Again",
"Pride and Prejudice",
],
)
structure_training_data(
text4,
["The Da Vinci Code", "The Alchemist", "Harry Potter and the Philosophers Stone"],
)
import os
os.makedirs("/kaggle/TRAIN_DATA", exist_ok=True)  # don't fail if the folder already exists
# define our training data to TRAIN_DATA
TRAIN_DATA = collective_dict["TRAINING_DATA"]
# create a blank model
nlp = spacy.blank("en")
def create_training(TRAIN_DATA):
db = DocBin()
for text, annot in tqdm(TRAIN_DATA):
doc = nlp.make_doc(text)
ents = []
# create span objects
for start, end, label in annot["entities"]:
span = doc.char_span(start, end, label=label, alignment_mode="contract")
# skip if the character indices do not map to a valid span
if span is None:
print("Skipping entity.")
else:
ents.append(span)
# handle erroneous entity annotations by removing them
try:
doc.ents = ents
        except Exception:
            # overlapping or invalid spans make the assignment fail; drop the last span and retry
            ents.pop()
            doc.ents = ents
# pack Doc objects into DocBin
db.add(doc)
return db
TRAIN_DATA_DOC = create_training(TRAIN_DATA)
# Export results (here I add it to a TRAIN_DATA folder within the directory)
TRAIN_DATA_DOC.to_disk("/kaggle/TRAIN_DATA/TRAIN_DATA.spacy")
# The model loaded below from /kaggle/working/output/model-best comes from spaCy's CLI training step,
# which is not shown in this notebook (the stray `--gpu-id 0` flag is a leftover from that command).
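# A hedged sketch of that missing training step, driven from Python via subprocess. The config
# options and the reuse of TRAIN_DATA.spacy as the dev set are assumptions for illustration only,
# not the author's exact command (a real run would use a separate dev split).
import subprocess

subprocess.run(
    ["python", "-m", "spacy", "init", "config", "config.cfg", "--lang", "en", "--pipeline", "ner"],
    check=True,
)
subprocess.run(
    [
        "python", "-m", "spacy", "train", "config.cfg",
        "--output", "/kaggle/working/output",
        "--paths.train", "/kaggle/TRAIN_DATA/TRAIN_DATA.spacy",
        "--paths.dev", "/kaggle/TRAIN_DATA/TRAIN_DATA.spacy",
        "--gpu-id", "0",  # assumes a GPU and spacy[cuda]; drop this flag to train on CPU
    ],
    check=True,
)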
# load the trained model
nlp_output = spacy.load("/kaggle/working/output/model-best")
def model_visualization(text):
# pass our test instance into the trained pipeline
doc = nlp_output(text)
# customize the label colors
colors = {"BOOK": "linear-gradient(90deg, #E1D436, #F59710)"}
options = {"ents": ["BOOK"], "colors": colors}
# visualize the identified entities
displacy.render(doc, style="ent", options=options, minify=True, jupyter=True)
# print out the identified entities
print("\nIDENTIFIED ENTITIES:")
[print(ent.text) for ent in doc.ents if ent.label_ == "BOOK"]
test1 = "The Girl with the Dragon Tattoo is a psychological thriller novel by Stieg Larsson\
The Bible is a religious text.\
Harry Potter and The Goblet of Fire is also written by J.K Rowling."
model_visualization(test1)
test2 = "War and Peace is a novel written by Leo Tolstoy and sold million copies. \
The Hunger Games is a series of novels written by Suzane around the early 2000s. \
The Divergent is also a series of the teen dystopia genre along with The Maze Runner series."
model_visualization(test2)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/601/129601226.ipynb
| null | null |
[{"Id": 129601226, "ScriptId": 38462273, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15036313, "CreationDate": "05/15/2023 06:53:41", "VersionNumber": 2.0, "Title": "Book CNER V1", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 178.0, "LinesInsertedFromPrevious": 34.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 144.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 1,802 | 0 | 1,802 | 1,802 |
||
129601022
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# # Import Data
#
train_df = pd.read_csv("/kaggle/input/titanic/train.csv")
train_df.head()
train_df.tail()
train_df.head()
import seaborn as sns
import matplotlib.pyplot as plt
sns.barplot(x="Survived", y="Fare", hue="Pclass", data=train_df)
# The plot shows that a higher fare increased the chances of survival
train_df["Fare"].mean()
sns.barplot(y="Survived", x="Sex", hue="Pclass", data=train_df)
# In every Pclass, the survival rate of females was clearly higher
sns.histplot(data=train_df, x="Age", hue="Survived", kde=True)
# The grey area in the histogram shows overlap. Being younger than 10 (a child) or older than 75 increased the survival rate; in all other age groups the blue colour (Survived = 0) outnumbers the orange (Survived = 1)
sns.catplot(kind="box", data=train_df, x="Sex", y="Fare")
plt.show()
# Were there more women in cheaper cabins compared to men? Does that mean they were saved despite their class?
sns.heatmap(train_df.corr(numeric_only=True))  # numeric_only restricts the correlation to numeric columns
# How are SibSp and Parch affecting the survival, or are they affecting it at all?
#
sns.violinplot(data=train_df, y="SibSp", x="Sex", hue="Survived")
sns.stripplot(
data=train_df, y="Parch", x="Sex", hue="Survived", jitter=True, dodge=True
)
# # Data Preparation
train_df.Sex[train_df.Sex == "male"] = 0
train_df.Sex[train_df.Sex == "female"] = 1
cols = ["Age", "Sex", "Parch", "SibSp", "Pclass", "Fare"]
for col in cols:
train_df[col].fillna(int(train_df[col].mean()), inplace=True)
from sklearn.model_selection import train_test_split
features = ["Age", "Sex", "Parch", "SibSp", "Pclass", "Fare"]
target = ["Survived"]
X = train_df[features]
y = train_df[target]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=64
)
# What if I remove Age as a feature?
features_ = ["Sex", "Parch", "SibSp", "Pclass", "Fare"]
target_ = ["Survived"]
X_ = train_df[features_]
y_ = train_df[target_]
X_train_, X_test_, y_train_, y_test_ = train_test_split(
X_, y_, test_size=0.2, random_state=42
)
# # Simplest Classifier
from sklearn.tree import DecisionTreeClassifier
Classifier = DecisionTreeClassifier()
# Fit the model
Classifier.fit(X_train, y_train)
# Predict the response for test dataset
y_pred = Classifier.predict(X_test)
# Fit the second model
Classifier.fit(X_train_, y_train_)
# Predict the response for test dataset
y_pred_ = Classifier.predict(X_test_)
from sklearn import metrics
# Model Accuracy, how often is the classifier correct?
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))
print("Accuracy:", metrics.accuracy_score(y_test_, y_pred_))
# The accuracy actually increased. Before submitting with these features, the quick cross-validation check below makes the comparison less dependent on a single train/test split.
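# Hedged sanity check: 5-fold cross-validation of the same classifier on both feature sets. The two
# splits above used different random_state values (64 vs 42), so the single-split comparison is not
# entirely apples-to-apples; cross-validation averages over several splits instead.
from sklearn.model_selection import cross_val_score

cv_with_age = cross_val_score(DecisionTreeClassifier(), X, y.values.ravel(), cv=5)
cv_without_age = cross_val_score(DecisionTreeClassifier(), X_, y_.values.ravel(), cv=5)
print("CV accuracy with Age   :", cv_with_age.mean())
print("CV accuracy without Age:", cv_without_age.mean())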
test_data = pd.read_csv("/kaggle/input/titanic/test.csv")
test_data.head()
features = ["Sex", "Parch", "SibSp", "Pclass", "Fare"]
test_data_final = test_data[features]
test_data_final.Sex[test_data_final.Sex == "male"] = 0
test_data_final.Sex[test_data_final.Sex == "female"] = 1
cols = ["Sex", "Parch", "SibSp", "Pclass", "Fare"]
for col in cols:
test_data_final[col].fillna(int(test_data_final[col].mean()), inplace=True)
result = Classifier.predict(test_data_final)
test_data_final["Survived"] = result
test_data_final
# # Submission
submission = pd.concat(
[test_data["PassengerId"], test_data_final["Survived"]], axis=1
).reset_index(drop=True)
submission
submission.to_csv("submission.csv", index=False)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/601/129601022.ipynb
| null | null |
[{"Id": 129601022, "ScriptId": 38425450, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12706278, "CreationDate": "05/15/2023 06:51:45", "VersionNumber": 3.0, "Title": "First Submission - Simplest Classifier", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 150.0, "LinesInsertedFromPrevious": 24.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 126.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 1,365 | 0 | 1,365 | 1,365 |
||
129601646
|
<jupyter_start><jupyter_text>Rice Image Dataset
Rice Image Dataset
DATASET: https://www.muratkoklu.com/datasets/
Citation Request: See the articles for more detailed information on the data.
Koklu, M., Cinar, I., & Taspinar, Y. S. (2021). Classification of rice varieties with deep learning methods. Computers and Electronics in Agriculture, 187, 106285. https://doi.org/10.1016/j.compag.2021.106285
Cinar, I., & Koklu, M. (2021). Determination of Effective and Specific Physical Features of Rice Varieties by Computer Vision In Exterior Quality Inspection. Selcuk Journal of Agriculture and Food Sciences, 35(3), 229-243. https://doi.org/10.15316/SJAFS.2021.252
Cinar, I., & Koklu, M. (2022). Identification of Rice Varieties Using Machine Learning Algorithms. Journal of Agricultural Sciences https://doi.org/10.15832/ankutbd.862482
Cinar, I., & Koklu, M. (2019). Classification of Rice Varieties Using Artificial Intelligence Methods. International Journal of Intelligent Systems and Applications in Engineering, 7(3), 188-194. https://doi.org/10.18201/ijisae.2019355381
DATASET: https://www.muratkoklu.com/datasets/
Highlights
• Arborio, Basmati, Ipsala, Jasmine and Karacadag rice varieties were used.
• The dataset (1) has 75K images including 15K pieces from each rice variety. The dataset (2) has 12 morphological, 4 shape and 90 color features.
• ANN, DNN and CNN models were used to classify rice varieties.
• Classified with an accuracy rate of 100% through the CNN model created.
• The models used achieved successful results in the classification of rice varieties.
Abstract
Rice, which is among the most widely produced grain products worldwide, has many genetic varieties. These varieties are separated from each other due to some of their features. These are usually features such as texture, shape, and color. With these features that distinguish rice varieties, it is possible to classify and evaluate the quality of seeds. In this study, Arborio, Basmati, Ipsala, Jasmine and Karacadag, which are five different varieties of rice often grown in Turkey, were used. A total of 75,000 grain images, 15,000 from each of these varieties, are included in the dataset. A second dataset with 106 features including 12 morphological, 4 shape and 90 color features obtained from these images was used. Models were created by using Artificial Neural Network (ANN) and Deep Neural Network (DNN) algorithms for the feature dataset and by using the Convolutional Neural Network (CNN) algorithm for the image dataset, and classification processes were performed. Statistical results of sensitivity, specificity, prediction, F1 score, accuracy, false positive rate and false negative rate were calculated using the confusion matrix values of the models and the results of each model were given in tables. Classification successes from the models were achieved as 99.87% for ANN, 99.95% for DNN and 100% for CNN. With the results, it is seen that the models used in the study in the classification of rice varieties can be applied successfully in this field.
Kaggle dataset identifier: rice-image-dataset
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import tensorflow as tf
train = tf.keras.utils.image_dataset_from_directory(
"/kaggle/input/rice-image-dataset/Rice_Image_Dataset",
color_mode="rgb",
label_mode="categorical",
batch_size=32,
seed=12,
subset="training",
validation_split=0.2,
image_size=(256, 256),
)
test = tf.keras.utils.image_dataset_from_directory(
"/kaggle/input/rice-image-dataset/Rice_Image_Dataset",
image_size=(256, 256),
batch_size=32,
seed=12,
color_mode="rgb",
label_mode="categorical",
subset="validation",
validation_split=0.2,
)
tf.random.set_seed(42)
model = tf.keras.models.Sequential(
[
tf.keras.layers.Conv2D(15, 5, activation="relu", input_shape=(256, 256, 3)),
tf.keras.layers.Conv2D(15, 5, activation="relu"),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(8, 3, activation="relu"),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(5, activation="softmax"),
]
)
model.compile(
loss=tf.keras.losses.CategoricalCrossentropy(),
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"],
)
model.summary()
history = model.fit(
train,
validation_data=test,
epochs=5,
steps_per_epoch=len(train),
validation_steps=len(test),
)
model.save("rice-classification")
model.save("rice_classification.h5")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/601/129601646.ipynb
|
rice-image-dataset
|
muratkokludataset
|
[{"Id": 129601646, "ScriptId": 38342336, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9671617, "CreationDate": "05/15/2023 06:57:15", "VersionNumber": 1.0, "Title": "Research-RiceGrain", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 70.0, "LinesInsertedFromPrevious": 70.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
|
[{"Id": 185836011, "KernelVersionId": 129601646, "SourceDatasetVersionId": 3399185}]
|
[{"Id": 3399185, "DatasetId": 2049052, "DatasourceVersionId": 3450858, "CreatorUserId": 10072866, "LicenseName": "CC0: Public Domain", "CreationDate": "04/03/2022 02:12:00", "VersionNumber": 1.0, "Title": "Rice Image Dataset", "Slug": "rice-image-dataset", "Subtitle": "Five different Rice Image Dataset. Arborio, Basmati, Ipsala, Jasmine, Karacadag", "Description": "Rice Image Dataset\nDATASET: https://www.muratkoklu.com/datasets/\n\nCitation Request: See the articles for more detailed information on the data.\n\nKoklu, M., Cinar, I., & Taspinar, Y. S. (2021). Classification of rice varieties with deep learning methods. Computers and Electronics in Agriculture, 187, 106285. https://doi.org/10.1016/j.compag.2021.106285\n\nCinar, I., & Koklu, M. (2021). Determination of Effective and Specific Physical Features of Rice Varieties by Computer Vision In Exterior Quality Inspection. Selcuk Journal of Agriculture and Food Sciences, 35(3), 229-243. https://doi.org/10.15316/SJAFS.2021.252\n\nCinar, I., & Koklu, M. (2022). Identification of Rice Varieties Using Machine Learning Algorithms. Journal of Agricultural Sciences https://doi.org/10.15832/ankutbd.862482\n\nCinar, I., & Koklu, M. (2019). Classification of Rice Varieties Using Artificial Intelligence Methods. International Journal of Intelligent Systems and Applications in Engineering, 7(3), 188-194. https://doi.org/10.18201/ijisae.2019355381\n\nDATASET: https://www.muratkoklu.com/datasets/\n\nHighlights\n\u2022 Arborio, Basmati, Ipsala, Jasmine and Karacadag rice varieties were used.\n\u2022 The dataset (1) has 75K images including 15K pieces from each rice variety. The dataset (2) has 12 morphological, 4 shape and 90 color features.\n\u2022 ANN, DNN and CNN models were used to classify rice varieties.\n\u2022 Classified with an accuracy rate of 100% through the CNN model created.\n\u2022 The models used achieved successful results in the classification of rice varieties.\n\nAbstract\nRice, which is among the most widely produced grain products worldwide, has many genetic varieties. These varieties are separated from each other due to some of their features. These are usually features such as texture, shape, and color. With these features that distinguish rice varieties, it is possible to classify and evaluate the quality of seeds. In this study, Arborio, Basmati, Ipsala, Jasmine and Karacadag, which are five different varieties of rice often grown in Turkey, were used. A total of 75,000 grain images, 15,000 from each of these varieties, are included in the dataset. A second dataset with 106 features including 12 morphological, 4 shape and 90 color features obtained from these images was used. Models were created by using Artificial Neural Network (ANN) and Deep Neural Network (DNN) algorithms for the feature dataset and by using the Convolutional Neural Network (CNN) algorithm for the image dataset, and classification processes were performed. Statistical results of sensitivity, specificity, prediction, F1 score, accuracy, false positive rate and false negative rate were calculated using the confusion matrix values of the models and the results of each model were given in tables. Classification successes from the models were achieved as 99.87% for ANN, 99.95% for DNN and 100% for CNN. With the results, it is seen that the models used in the study in the classification of rice varieties can be applied successfully in this field.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2049052, "CreatorUserId": 10072866, "OwnerUserId": 10072866.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3399185.0, "CurrentDatasourceVersionId": 3450858.0, "ForumId": 2074065, "Type": 2, "CreationDate": "04/03/2022 02:12:00", "LastActivityDate": "04/03/2022", "TotalViews": 113676, "TotalDownloads": 11463, "TotalVotes": 1718, "TotalKernels": 236}]
|
[{"Id": 10072866, "UserName": "muratkokludataset", "DisplayName": "Murat KOKLU", "RegisterDate": "03/28/2022", "PerformanceTier": 2}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import tensorflow as tf
train = tf.keras.utils.image_dataset_from_directory(
"/kaggle/input/rice-image-dataset/Rice_Image_Dataset",
color_mode="rgb",
label_mode="categorical",
batch_size=32,
seed=12,
subset="training",
validation_split=0.2,
image_size=(256, 256),
)
test = tf.keras.utils.image_dataset_from_directory(
"/kaggle/input/rice-image-dataset/Rice_Image_Dataset",
image_size=(256, 256),
batch_size=32,
seed=12,
color_mode="rgb",
label_mode="categorical",
subset="validation",
validation_split=0.2,
)
tf.random.set_seed(42)
model = tf.keras.models.Sequential(
[
tf.keras.layers.Conv2D(15, 5, activation="relu", input_shape=(256, 256, 3)),
tf.keras.layers.Conv2D(15, 5, activation="relu"),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(10, 3, activation="relu"),
tf.keras.layers.Conv2D(8, 3, activation="relu"),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(5, activation="softmax"),
]
)
model.compile(
loss=tf.keras.losses.CategoricalCrossentropy(),
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"],
)
model.summary()
history = model.fit(
train,
validation_data=test,
epochs=5,
steps_per_epoch=len(train),
validation_steps=len(test),
)
model.save("rice-classification")
model.save("rice_classification.h5")
| false | 0 | 660 | 2 | 1,582 | 660 |
||
129795477
|
# # **⚔️ SQLite vs Apache Spark**
# ### The comparison between **SQLite and Apache Spark** with python in **small data and big data** and their behavior for each situation.
# 
# > 📚 ref.:
# > 1. https://www.kaggle.com/code/leodaniel/sql-quick-guide
# > 2. https://sqlite.org/about.html
# > 3. https://spark.apache.org/
# In this notebook, we will **compare the data processing capabilities of SQLite and Apache Spark**. We will operations performed are **Create, Read, Update, and Delete** (CRUD operations), which are common tasks in data handling. We will also **analyze the performance of both systems by measuring the time it takes to complete operations**.
# # **❗️ Important**
# This **notebook was based** on the [notebook](https://www.kaggle.com/code/leodaniel/sql-quick-guide), for your **better understanding of SQL I recommend that you first see this notebook where important and didactic approaches are made about CRUD**. This notebook also **uses the SQL code used in the** [notebook](https://www.kaggle.com/code/leodaniel/sql-quick-guide), **making it even easier for those who have already read the base notebook to understand**.
# **Here we will highlight the comparison between SQLite and Apache Spark with python in small data volumes and large data volumes and their behavior for each situation**.
# # **SQLite**
# SQLite is a public domain, **SQL database engine with zero-configuration**. It stands out due to its ubiquity across applications and platforms, including several high-profile projects. Unlike other SQL databases, **SQLite does not rely on a separate server process but reads and writes directly to ordinary disk files**, with the entire database contained in a single cross-platform file. Its compact nature and performance, even in low-memory environments, make it a preferred choice for an Application File Format.Despite **its small size, SQLite maintains impressive performance and reliability standards**.
# # **Apache Spark**
# Apache Spark is an open source distributed computing system used for **big data processing and analysis**. It provides an interface for programming entire clusters with **implicit data parallelism and fault tolerance**. **PySpark is the Python library** for Spark.
# **PySpark** offers a high-level API for distributed data processing and **enables data analysis in conjunction with popular Python libraries such as NumPy and pandas**. It provides support for many Spark features, including Spark **SQL for working with structured data**. It also **utilizes in-memory processing resources to improve the performance of big data analytics applications**. It allows the user to interact with **Python RDDs**, which are a core component of the Spark architecture, **providing data immutability and the ability to work across distributed systems**.
# # **1. Install dependencies and import libraries**
import time
import pandas as pd
import numpy as np
import sqlite3
from tqdm import tqdm
import random
import string
from IPython.display import Markdown
import plotly.graph_objects as go
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, avg, when, isnan, lit, broadcast
from pyspark.sql.types import StringType
from pyspark.storagelevel import StorageLevel
# # **2. Struct construction for SQLite operation**
# **DatabaseManager**: This class manages interactions with a SQLite database. It can execute SQL commands, queries, and multiple commands in sequence.
class DatabaseManager:
def __init__(self, db_connection):
self.conn = db_connection
def execute_command(self, sql_command):
cursor = self.conn.cursor()
cursor.execute(sql_command)
self.conn.commit()
def execute_query(self, query):
cursor = self.conn.cursor()
cursor.execute(query)
result = cursor.fetchall()
column_names = [column[0] for column in cursor.description]
df = pd.DataFrame(columns=column_names, data=result)
cursor.close()
return df
def execute_many_commands(self, sql_commands):
cursor = self.conn.cursor()
for comm in tqdm(sql_commands):
print(comm)
cursor.execute(comm)
self.conn.commit()
# **DataFrameDescriber**: This class provides a static method to generate a description of a Pandas dataframe, including data types, number of unique values, examples of values, number of null values (and percentage), and statistical summary (if numeric).
class DataFrameDescriber:
@staticmethod
def describe_dataframe(df):
"""
Generate a description of a Pandas dataframe, including data types, number of unique values,
examples of values, number of null values (and percentage), and statistical summary (if numeric).
Parameters:
df (pandas.DataFrame): The dataframe.
Returns:
desc: A dataframe containing the description of the dataframe.
"""
# Initialize the description dictionary
description = {}
# Iterate over the columns of the dataframe
for col_name, dtype in df.dtypes.items():
# Get the number of unique values and some examples of values
num_unique = df[col_name].nunique()
examples = df[col_name].value_counts().index[:3].tolist()
# Check if the column is numeric
is_numeric = pd.api.types.is_numeric_dtype(dtype)
# Get the number of null values and percentage
num_null = df[col_name].isnull().sum()
percent_null = np.round(num_null / len(df) * 100, 2)
# IPassengerIdnitialize the column description dictionary
col_description = {
"col_name": col_name,
"data_type": dtype,
"num_unique": num_unique,
"examples": examples,
"num_null": num_null,
"percent_null": percent_null,
}
# If the column is numeric, add statistical summary to the description
if is_numeric:
col_description.update(
{
"min": np.round(df[col_name].min(), 2),
"max": np.round(df[col_name].max(), 2),
"mean": np.round(df[col_name].mean(), 2),
}
)
# Add the column description to the overall description
description[col_name] = col_description
return pd.DataFrame(description).T.fillna("")
# **SqlCreator**: This class provides static methods to generate SQL commands, like creating a table based on a Pandas dataframe and executing a select query.
class SqlCreator:
@staticmethod
def get_create_table_command(df, table_name):
"""
Generate a CREATE TABLE command for SQLite based on a Pandas dataframe.
Include the NOT NULL constraint for columns that do not have null values.
Include the PRIMARY KEY constraint for columns called "id" that do not have null values.
Parameters:
df (pandas.DataFrame): The dataframe.
table_name (str): The name of the table.
Returns:
str: The CREATE TABLE command.
"""
# Get the column names and data types from the dataframe
columns = []
for col_name, dtype in df.dtypes.items():
# Convert the data type to a SQLite data type
if dtype == "int64":
sqlite_type = "INTEGER"
elif dtype == "float64":
sqlite_type = "REAL"
else:
sqlite_type = "TEXT"
# Check if the column has null values
has_null = df[col_name].isnull().any()
# Add the NOT NULL constraint if the column does not have null values
if not has_null:
sqlite_type += " NOT NULL"
# Add the PRIMARY KEY constraint if the column is called "id" and does not have null values
if col_name == "Id" and not has_null:
sqlite_type += " PRIMARY KEY"
# Add the column name and data type to the list
columns.append(f"\t{col_name} {sqlite_type}")
# Join the column definitions into a single string
column_defs = ", \n".join(columns)
# Return the CREATE TABLE command
return f"CREATE TABLE {table_name} (\n{column_defs}\n)"
def select_data(database_manager, query):
# Just execute the query
return database_manager.execute_query(query)
# **StringDataframeConverter**: This class provides a static method to convert a string with a dataset into a Pandas dataframe.
class StringDataframeConverter:
@staticmethod
def string_to_dataframe(data):
"""
Convert a string with a dataset into a Pandas dataframe.
The first column is the id and the second column is the description.
Parameters:
data (str): The dataset as a string.
Returns:
pandas.DataFrame: The dataframe.
"""
# Initialize an empty dictionary
dict_data = {"Id": [], "description": []}
# Split the data into lines
lines = data.strip().split("\n")
# Iterate over the lines
for line in lines:
# Split the line into its two parts (id and description)
_id, description = line.split("\t")
# Add the id and description to the dictionary
dict_data["Id"].append(_id)
dict_data["description"].append(description)
# Create a dataframe from the dictionary
df = pd.DataFrame(dict_data)
# Return the dataframe
return df
# **DataFrameToSql**: This class provides a method to convert a Pandas dataframe into a SQLite database table.
class DataFrameToSql:
def __init__(self, database_manager: DatabaseManager):
self.dbm = database_manager
def data_to_sql(self, df, table_name):
df.to_sql(table_name, self.dbm.conn, if_exists="replace", index=False)
# The **oparation_SQlite_and_pandas()** function performs data operations in SQLite and Pandas:
# - Connects to an SQLite database and loads a dataset.
# - Generates a description of the dataframe and creates a table in the SQLite database.
# - Modifies the dataset, creates another table, changes the first table, and inserts data into the tables.
# - Performs CRUD operations on tables (select, update and delete commands).
# - Calculates and returns the time spent on these operations to be used in subsequent operations.
def oparation_SQlite_and_pandas(path):
# Perform operation in pandas and SQLite
start_time = time.time()
# Connect to an in-memory database
conn = sqlite3.connect(":memory:")
database_manager = DatabaseManager(conn)
# Load the data
titanic = pd.read_csv(path)
# Describe the dataframe
describer = DataFrameDescriber()
desc = describer.describe_dataframe(titanic)
# Create the SQL commands
creator = SqlCreator()
create_table = creator.get_create_table_command(
titanic.drop(columns=["Embarked"]), "tb_titanic"
)
# Execute the commands
database_manager.execute_command(create_table)
# Convert the string to a dataframe
converter = StringDataframeConverter()
s = """
C\tCherbourg
Q\tQueenstown
S\tSouthampton
"""
port_of_embarkation = converter.string_to_dataframe(s)
# Create the table
create_table = creator.get_create_table_command(
port_of_embarkation, "tb_port_embarkation"
)
database_manager.execute_command(create_table)
# Alter the table
alter_table = "ALTER TABLE tb_titanic ADD COLUMN Embarked TEXT REFERENCES tb_port_embarkation(Id)"
database_manager.execute_command(alter_table)
# Insert the data into the table
data_to_sql = DataFrameToSql(database_manager)
data_to_sql.data_to_sql(port_of_embarkation, "tb_port_embarkation")
# ----------------------------------------------------------
# The select command (The C of CRUD)
# ----------------------------------------------------------
print("\033[1m" + "\n--- The select command (The C of CRUD) ---\n")
# Get the column names
column_names = ",".join(port_of_embarkation.columns)
# Build the insert commands
insert_commands = [
f"INSERT INTO tb_port_embarkation ({column_names}) VALUES {tuple(values)}"
for values in port_of_embarkation.values
]
# Execute many commands
database_manager.execute_many_commands(insert_commands)
# Insert the data into the table
data_to_sql.data_to_sql(titanic, "tb_titanic")
# ----------------------------------------------------------
# The select command (The R of CRUD)
# ----------------------------------------------------------
print("\033[1m" + "\n--- The select command (The R of CRUD) ---\n")
# Query select all data
query = """
SELECT *
FROM tb_titanic;
"""
s = f"""
```sql
{query}
```
"""
display(Markdown(s))
print("Check shape: ")
display(SqlCreator.select_data(database_manager, query).shape)
# ----------------------------------------------------------
# The update command (The U of CRUD)
# ----------------------------------------------------------
print("\033[1m" + "\n--- The update command (The U of CRUD) ---\n")
sql_command = """
UPDATE tb_titanic
SET Age = 27
WHERE Sex='female' or Age is Null;
"""
s = f"""
```sql
{sql_command}
```
"""
display(Markdown(s))
# run update
database_manager.execute_command(sql_command)
# show results update
query = """
SELECT Sex, AVG(Age)
FROM tb_titanic
GROUP BY Sex;
"""
print("Check results update: ")
display(SqlCreator.select_data(database_manager, query).head())
# ----------------------------------------------------------
# The delete command (The D of CRUD)
# ----------------------------------------------------------
print("\033[1m" + "\n--- The delete command (The D of CRUD) ---\n")
sql_command = """
DELETE from tb_titanic
WHERE Embarked is Null;
"""
s = f"""
```sql
{sql_command}
```
"""
display(Markdown(s))
# run delete
database_manager.execute_command(sql_command)
# show results shape after delete
query = """
SELECT *
FROM tb_titanic;
"""
s = f"""
```sql
{query}
```
"""
print("Check shape after delete: ")
display(SqlCreator.select_data(database_manager, query).shape)
sqlite_time = time.time() - start_time
return sqlite_time
# The **operation_spark()** function performs similar data operations in Spark:
# - Initializes a SparkSession and loads a dataset.
# - Creates a new dataframe and joins it with the original dataframe.
# - Performs the CRUD operations on the dataframe (show, update, and filter actions).
# - Calculates and returns the time spent on these operations to be used in subsequent operations.
def operation_spark(path):
# Perform operation in Spark
start_time = time.time()
# Initialize a SparkSession
spark = SparkSession.builder.appName("titanic_analysis").getOrCreate()
# Load the data
titanic = spark.read.csv(path, header=True, inferSchema=True)
# ----------------------------------------------------------
# (equivalent spark) The select command (The C of CRUD)
# ----------------------------------------------------------
print(
"\033[1m" + "\n--- (equivalent spark) The select command (The C of CRUD) ---\n"
)
# Check original shape
print((titanic.count(), len(titanic.columns)))
# Show the schema (equivalent to create table command in SQL)
titanic.printSchema()
# Convert the string to a dataframe
data = [("C", "Cherbourg"), ("Q", "Queenstown"), ("S", "Southampton")]
port_of_embarkation = spark.createDataFrame(data, ["Id", "description"])
# Join the dataframes (equivalent to alter table and insert data in SQL)
titanic = titanic.join(
broadcast(port_of_embarkation),
titanic.Embarked == port_of_embarkation.Id,
"left_outer",
)
# Cache the dataframe if it's used multiple times and not already cached
if not titanic.storageLevel != StorageLevel(False, True, False, False, 1):
titanic.cache()
# ----------------------------------------------------------
# (equivalent spark) The select command (The R of CRUD)
# ----------------------------------------------------------
print(
"\033[1m" + "\n--- (equivalent spark) The select command (The R of CRUD) ---\n"
)
# Show the data (equivalent to select command in SQL)
titanic.show()
# ----------------------------------------------------------
# (equivalent spark) The update command (The U of CRUD)
# ----------------------------------------------------------
print(
"\033[1m" + "\n--- (equivalent spark) The update command (The U of CRUD) ---\n"
)
# Update the data (equivalent to update command in SQL)
titanic = titanic.withColumn(
"Age",
when((col("Sex") == "female") | (col("Age").isNull()), 27).otherwise(
col("Age")
),
)
# Check results update
print("Check results update: ")
titanic.groupBy("Sex").agg(avg("Age")).show()
# ----------------------------------------------------------
# (equivalent spark) The delete command (The D of CRUD)
# ----------------------------------------------------------
print(
"\033[1m" + "\n--- (equivalent spark) The delete command (The D of CRUD) ---\n"
)
# Delete the data (equivalent to delete command in SQL)
titanic = titanic.filter(col("Embarked").isNotNull())
# Check shape after delete
print("Check shape after delete: ")
print((titanic.count(), len(titanic.columns)))
spark_time = time.time() - start_time
return spark_time
# # **3. Comparing SQLite and Spark**
# ## 📢 Comparing SQLite and Spark Performance with **small data volume** (**we will perform 3x the same operations**)
original_data = "/kaggle/input/titanic/train.csv"
sqlite_time_first_try = []
spark_time_first_try = []
for i in range(3):
sqlite_time_first_try.append(oparation_SQlite_and_pandas(original_data))
spark_time_first_try.append(operation_spark(original_data))
# ## 📢 Comparing SQLite and Spark Performance with **larger data volume** (**we will perform 3x the same operations**)
# **👁🗨 Obs: So that we can get more data we will confirm the existing data 1000x with function increase_dataframe**
def increase_dataframe(df, n):
return pd.concat([df] * n, ignore_index=True)
# increase dataframe and save
titanic = pd.read_csv(original_data)
titanic_large = increase_dataframe(titanic, 1000) # repeat 1000 times
titanic_large.to_csv("titanic_large.csv", index=False)
titanic_large = pd.read_csv("/kaggle/working/titanic_large.csv")
print("New shape: " + str(titanic_large.shape))
# ## **🏁 Let's comparation again**
titanic_large = "/kaggle/working/titanic_large.csv"
sqlite_time_second_try = []
spark_time_second_try = []
for i in range(3):
sqlite_time_second_try.append(oparation_SQlite_and_pandas(titanic_large))
spark_time_second_try.append(operation_spark(titanic_large))
# # **4. Analysis of results**
# ## ✔️ Checking of results with **small data**
for i in range(3):
print(f"Attempt:" + str(i + 1))
print(f"SQLite processing time: {sqlite_time_first_try[i]} seconds")
print(f"Spark processing time: {spark_time_first_try[i]} seconds\n")
# We can see that with a **small data** **SQLite has a big advantage over Spark**, especially in the **first attempt where spark uses it to store the data that will be used in memory**. We can see that after the **second attempt, spark has a drastic drop in performance**, this is because the data **is already saved in the chache, managing to make its calls faster**, we also observed that in **every attempt SQLite remains stable** with its time rate always very close.
# So in this case with a **small volume of data, the best thing to use would be SQLite**.
# ## ✔️ Checking results with **big data**
for i in range(3):
print(f"Attempt:" + str(i + 1))
print(f"SQLite processing time: {sqlite_time_second_try[i]} seconds")
print(f"Spark processing time: {spark_time_second_try[i]} seconds\n")
# With a **large volume of data** now we can see how **Spark outperforms SQLite**, having a **better performance from the second try while SQLite remains stable over time**.
# So, in cases with **big data, Spark performs more than 2x better than SQLite**.
# # **5. Conclusion**
# Create traces
trace1 = go.Box(
y=sqlite_time_first_try, name="SQLite First Try", marker_color="#3D9970"
)
trace2 = go.Box(y=spark_time_first_try, name="Spark First Try", marker_color="#FF4136")
trace3 = go.Box(
y=sqlite_time_second_try, name="SQLite Second Try", marker_color="#FF851B"
)
trace4 = go.Box(
y=spark_time_second_try, name="Spark Second Try", marker_color="#0074D9"
)
data = [trace1, trace2, trace3, trace4]
layout = go.Layout(
title="Performance Comparison Between SQLite and Spark",
yaxis=dict(title="Time (seconds)"),
boxmode="group", # group together boxes of the different traces for each value of x
)
fig = go.Figure(data=data, layout=layout)
fig.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/795/129795477.ipynb
| null | null |
[{"Id": 129795477, "ScriptId": 38601705, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11108772, "CreationDate": "05/16/2023 14:14:36", "VersionNumber": 1.0, "Title": "SQLite vs Apache Spark", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 538.0, "LinesInsertedFromPrevious": 538.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 5}]
| null | null | null | null |
# # **⚔️ SQLite vs Apache Spark**
# ### The comparison between **SQLite and Apache Spark** with python in **small data and big data** and their behavior for each situation.
# 
# > 📚 ref.:
# > 1. https://www.kaggle.com/code/leodaniel/sql-quick-guide
# > 2. https://sqlite.org/about.html
# > 3. https://spark.apache.org/
# In this notebook, we will **compare the data processing capabilities of SQLite and Apache Spark**. We will operations performed are **Create, Read, Update, and Delete** (CRUD operations), which are common tasks in data handling. We will also **analyze the performance of both systems by measuring the time it takes to complete operations**.
# # **❗️ Important**
# This **notebook was based** on the [notebook](https://www.kaggle.com/code/leodaniel/sql-quick-guide), for your **better understanding of SQL I recommend that you first see this notebook where important and didactic approaches are made about CRUD**. This notebook also **uses the SQL code used in the** [notebook](https://www.kaggle.com/code/leodaniel/sql-quick-guide), **making it even easier for those who have already read the base notebook to understand**.
# **Here we will highlight the comparison between SQLite and Apache Spark with python in small data volumes and large data volumes and their behavior for each situation**.
# # **SQLite**
# SQLite is a public domain, **SQL database engine with zero-configuration**. It stands out due to its ubiquity across applications and platforms, including several high-profile projects. Unlike other SQL databases, **SQLite does not rely on a separate server process but reads and writes directly to ordinary disk files**, with the entire database contained in a single cross-platform file. Its compact nature and performance, even in low-memory environments, make it a preferred choice for an Application File Format.Despite **its small size, SQLite maintains impressive performance and reliability standards**.
# # **Apache Spark**
# Apache Spark is an open source distributed computing system used for **big data processing and analysis**. It provides an interface for programming entire clusters with **implicit data parallelism and fault tolerance**. **PySpark is the Python library** for Spark.
# **PySpark** offers a high-level API for distributed data processing and **enables data analysis in conjunction with popular Python libraries such as NumPy and pandas**. It provides support for many Spark features, including Spark **SQL for working with structured data**. It also **utilizes in-memory processing resources to improve the performance of big data analytics applications**. It allows the user to interact with **Python RDDs**, which are a core component of the Spark architecture, **providing data immutability and the ability to work across distributed systems**.
# # **1. Install dependencies and import libraries**
import time
import pandas as pd
import numpy as np
import sqlite3
from tqdm import tqdm
import random
import string
from IPython.display import Markdown
import plotly.graph_objects as go
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, avg, when, isnan, lit, broadcast
from pyspark.sql.types import StringType
from pyspark.storagelevel import StorageLevel
# # **2. Struct construction for SQLite operation**
# **DatabaseManager**: This class manages interactions with a SQLite database. It can execute SQL commands, queries, and multiple commands in sequence.
class DatabaseManager:
def __init__(self, db_connection):
self.conn = db_connection
def execute_command(self, sql_command):
cursor = self.conn.cursor()
cursor.execute(sql_command)
self.conn.commit()
def execute_query(self, query):
cursor = self.conn.cursor()
cursor.execute(query)
result = cursor.fetchall()
column_names = [column[0] for column in cursor.description]
df = pd.DataFrame(columns=column_names, data=result)
cursor.close()
return df
def execute_many_commands(self, sql_commands):
cursor = self.conn.cursor()
for comm in tqdm(sql_commands):
print(comm)
cursor.execute(comm)
self.conn.commit()
# **DataFrameDescriber**: This class provides a static method to generate a description of a Pandas dataframe, including data types, number of unique values, examples of values, number of null values (and percentage), and statistical summary (if numeric).
class DataFrameDescriber:
@staticmethod
def describe_dataframe(df):
"""
Generate a description of a Pandas dataframe, including data types, number of unique values,
examples of values, number of null values (and percentage), and statistical summary (if numeric).
Parameters:
df (pandas.DataFrame): The dataframe.
Returns:
desc: A dataframe containing the description of the dataframe.
"""
# Initialize the description dictionary
description = {}
# Iterate over the columns of the dataframe
for col_name, dtype in df.dtypes.items():
# Get the number of unique values and some examples of values
num_unique = df[col_name].nunique()
examples = df[col_name].value_counts().index[:3].tolist()
# Check if the column is numeric
is_numeric = pd.api.types.is_numeric_dtype(dtype)
# Get the number of null values and percentage
num_null = df[col_name].isnull().sum()
percent_null = np.round(num_null / len(df) * 100, 2)
# IPassengerIdnitialize the column description dictionary
col_description = {
"col_name": col_name,
"data_type": dtype,
"num_unique": num_unique,
"examples": examples,
"num_null": num_null,
"percent_null": percent_null,
}
# If the column is numeric, add statistical summary to the description
if is_numeric:
col_description.update(
{
"min": np.round(df[col_name].min(), 2),
"max": np.round(df[col_name].max(), 2),
"mean": np.round(df[col_name].mean(), 2),
}
)
# Add the column description to the overall description
description[col_name] = col_description
return pd.DataFrame(description).T.fillna("")
# **SqlCreator**: This class provides static methods to generate SQL commands, like creating a table based on a Pandas dataframe and executing a select query.
class SqlCreator:
@staticmethod
def get_create_table_command(df, table_name):
"""
Generate a CREATE TABLE command for SQLite based on a Pandas dataframe.
Include the NOT NULL constraint for columns that do not have null values.
Include the PRIMARY KEY constraint for columns called "id" that do not have null values.
Parameters:
df (pandas.DataFrame): The dataframe.
table_name (str): The name of the table.
Returns:
str: The CREATE TABLE command.
"""
# Get the column names and data types from the dataframe
columns = []
for col_name, dtype in df.dtypes.items():
# Convert the data type to a SQLite data type
if dtype == "int64":
sqlite_type = "INTEGER"
elif dtype == "float64":
sqlite_type = "REAL"
else:
sqlite_type = "TEXT"
# Check if the column has null values
has_null = df[col_name].isnull().any()
# Add the NOT NULL constraint if the column does not have null values
if not has_null:
sqlite_type += " NOT NULL"
# Add the PRIMARY KEY constraint if the column is called "id" and does not have null values
if col_name == "Id" and not has_null:
sqlite_type += " PRIMARY KEY"
# Add the column name and data type to the list
columns.append(f"\t{col_name} {sqlite_type}")
# Join the column definitions into a single string
column_defs = ", \n".join(columns)
# Return the CREATE TABLE command
return f"CREATE TABLE {table_name} (\n{column_defs}\n)"
def select_data(database_manager, query):
# Just execute the query
return database_manager.execute_query(query)
# **StringDataframeConverter**: This class provides a static method to convert a string with a dataset into a Pandas dataframe.
class StringDataframeConverter:
@staticmethod
def string_to_dataframe(data):
"""
Convert a string with a dataset into a Pandas dataframe.
The first column is the id and the second column is the description.
Parameters:
data (str): The dataset as a string.
Returns:
pandas.DataFrame: The dataframe.
"""
# Initialize an empty dictionary
dict_data = {"Id": [], "description": []}
# Split the data into lines
lines = data.strip().split("\n")
# Iterate over the lines
for line in lines:
# Split the line into its two parts (id and description)
_id, description = line.split("\t")
# Add the id and description to the dictionary
dict_data["Id"].append(_id)
dict_data["description"].append(description)
# Create a dataframe from the dictionary
df = pd.DataFrame(dict_data)
# Return the dataframe
return df
# **DataFrameToSql**: This class provides a method to convert a Pandas dataframe into a SQLite database table.
class DataFrameToSql:
def __init__(self, database_manager: DatabaseManager):
self.dbm = database_manager
def data_to_sql(self, df, table_name):
df.to_sql(table_name, self.dbm.conn, if_exists="replace", index=False)
# The **oparation_SQlite_and_pandas()** function performs data operations in SQLite and Pandas:
# - Connects to an SQLite database and loads a dataset.
# - Generates a description of the dataframe and creates a table in the SQLite database.
# - Modifies the dataset, creates another table, changes the first table, and inserts data into the tables.
# - Performs CRUD operations on tables (select, update and delete commands).
# - Calculates and returns the time spent on these operations to be used in subsequent operations.
def oparation_SQlite_and_pandas(path):
# Perform operation in pandas and SQLite
start_time = time.time()
# Connect to an in-memory database
conn = sqlite3.connect(":memory:")
database_manager = DatabaseManager(conn)
# Load the data
titanic = pd.read_csv(path)
# Describe the dataframe
describer = DataFrameDescriber()
desc = describer.describe_dataframe(titanic)
# Create the SQL commands
creator = SqlCreator()
create_table = creator.get_create_table_command(
titanic.drop(columns=["Embarked"]), "tb_titanic"
)
# Execute the commands
database_manager.execute_command(create_table)
# Convert the string to a dataframe
converter = StringDataframeConverter()
s = """
C\tCherbourg
Q\tQueenstown
S\tSouthampton
"""
port_of_embarkation = converter.string_to_dataframe(s)
# Create the table
create_table = creator.get_create_table_command(
port_of_embarkation, "tb_port_embarkation"
)
database_manager.execute_command(create_table)
# Alter the table
alter_table = "ALTER TABLE tb_titanic ADD COLUMN Embarked TEXT REFERENCES tb_port_embarkation(Id)"
database_manager.execute_command(alter_table)
# Insert the data into the table
data_to_sql = DataFrameToSql(database_manager)
data_to_sql.data_to_sql(port_of_embarkation, "tb_port_embarkation")
# ----------------------------------------------------------
# The select command (The C of CRUD)
# ----------------------------------------------------------
print("\033[1m" + "\n--- The select command (The C of CRUD) ---\n")
# Get the column names
column_names = ",".join(port_of_embarkation.columns)
# Build the insert commands
insert_commands = [
f"INSERT INTO tb_port_embarkation ({column_names}) VALUES {tuple(values)}"
for values in port_of_embarkation.values
]
# Execute many commands
database_manager.execute_many_commands(insert_commands)
# Insert the data into the table
data_to_sql.data_to_sql(titanic, "tb_titanic")
# ----------------------------------------------------------
# The select command (The R of CRUD)
# ----------------------------------------------------------
print("\033[1m" + "\n--- The select command (The R of CRUD) ---\n")
# Query select all data
query = """
SELECT *
FROM tb_titanic;
"""
s = f"""
```sql
{query}
```
"""
display(Markdown(s))
print("Check shape: ")
display(SqlCreator.select_data(database_manager, query).shape)
# ----------------------------------------------------------
# The update command (The U of CRUD)
# ----------------------------------------------------------
print("\033[1m" + "\n--- The update command (The U of CRUD) ---\n")
sql_command = """
UPDATE tb_titanic
SET Age = 27
WHERE Sex='female' or Age is Null;
"""
s = f"""
```sql
{sql_command}
```
"""
display(Markdown(s))
# run update
database_manager.execute_command(sql_command)
# show results update
query = """
SELECT Sex, AVG(Age)
FROM tb_titanic
GROUP BY Sex;
"""
print("Check results update: ")
display(SqlCreator.select_data(database_manager, query).head())
# ----------------------------------------------------------
# The delete command (The D of CRUD)
# ----------------------------------------------------------
print("\033[1m" + "\n--- The delete command (The D of CRUD) ---\n")
sql_command = """
DELETE from tb_titanic
WHERE Embarked is Null;
"""
s = f"""
```sql
{sql_command}
```
"""
display(Markdown(s))
# run delete
database_manager.execute_command(sql_command)
# show results shape after delete
query = """
SELECT *
FROM tb_titanic;
"""
s = f"""
```sql
{query}
```
"""
print("Check shape after delete: ")
display(SqlCreator.select_data(database_manager, query).shape)
sqlite_time = time.time() - start_time
return sqlite_time
# The **operation_spark()** function performs similar data operations in Spark:
# - Initializes a SparkSession and loads a dataset.
# - Creates a new dataframe and joins it with the original dataframe.
# - Performs the CRUD operations on the dataframe (show, update, and filter actions).
# - Calculates and returns the time spent on these operations to be used in subsequent operations.
def operation_spark(path):
# Perform operation in Spark
start_time = time.time()
# Initialize a SparkSession
spark = SparkSession.builder.appName("titanic_analysis").getOrCreate()
# Load the data
titanic = spark.read.csv(path, header=True, inferSchema=True)
# ----------------------------------------------------------
# (equivalent spark) The select command (The C of CRUD)
# ----------------------------------------------------------
print(
"\033[1m" + "\n--- (equivalent spark) The select command (The C of CRUD) ---\n"
)
# Check original shape
print((titanic.count(), len(titanic.columns)))
# Show the schema (equivalent to create table command in SQL)
titanic.printSchema()
# Convert the string to a dataframe
data = [("C", "Cherbourg"), ("Q", "Queenstown"), ("S", "Southampton")]
port_of_embarkation = spark.createDataFrame(data, ["Id", "description"])
# Join the dataframes (equivalent to alter table and insert data in SQL)
titanic = titanic.join(
broadcast(port_of_embarkation),
titanic.Embarked == port_of_embarkation.Id,
"left_outer",
)
# Cache the dataframe if it's used multiple times and not already cached
if not titanic.storageLevel != StorageLevel(False, True, False, False, 1):
titanic.cache()
# ----------------------------------------------------------
# (equivalent spark) The select command (The R of CRUD)
# ----------------------------------------------------------
print(
"\033[1m" + "\n--- (equivalent spark) The select command (The R of CRUD) ---\n"
)
# Show the data (equivalent to select command in SQL)
titanic.show()
# ----------------------------------------------------------
# (equivalent spark) The update command (The U of CRUD)
# ----------------------------------------------------------
print(
"\033[1m" + "\n--- (equivalent spark) The update command (The U of CRUD) ---\n"
)
# Update the data (equivalent to update command in SQL)
titanic = titanic.withColumn(
"Age",
when((col("Sex") == "female") | (col("Age").isNull()), 27).otherwise(
col("Age")
),
)
# Check results update
print("Check results update: ")
titanic.groupBy("Sex").agg(avg("Age")).show()
# ----------------------------------------------------------
# (equivalent spark) The delete command (The D of CRUD)
# ----------------------------------------------------------
print(
"\033[1m" + "\n--- (equivalent spark) The delete command (The D of CRUD) ---\n"
)
# Delete the data (equivalent to delete command in SQL)
titanic = titanic.filter(col("Embarked").isNotNull())
# Check shape after delete
print("Check shape after delete: ")
print((titanic.count(), len(titanic.columns)))
spark_time = time.time() - start_time
return spark_time
# # **3. Comparing SQLite and Spark**
# ## 📢 Comparing SQLite and Spark Performance with **small data volume** (**we will perform 3x the same operations**)
original_data = "/kaggle/input/titanic/train.csv"
sqlite_time_first_try = []
spark_time_first_try = []
for i in range(3):
sqlite_time_first_try.append(oparation_SQlite_and_pandas(original_data))
spark_time_first_try.append(operation_spark(original_data))
# ## 📢 Comparing SQLite and Spark Performance with **larger data volume** (**we will perform 3x the same operations**)
# **👁🗨 Obs: So that we can get more data we will confirm the existing data 1000x with function increase_dataframe**
def increase_dataframe(df, n):
return pd.concat([df] * n, ignore_index=True)
# increase dataframe and save
titanic = pd.read_csv(original_data)
titanic_large = increase_dataframe(titanic, 1000) # repeat 1000 times
titanic_large.to_csv("titanic_large.csv", index=False)
titanic_large = pd.read_csv("/kaggle/working/titanic_large.csv")
print("New shape: " + str(titanic_large.shape))
# ## **🏁 Let's comparation again**
titanic_large = "/kaggle/working/titanic_large.csv"
sqlite_time_second_try = []
spark_time_second_try = []
for i in range(3):
sqlite_time_second_try.append(oparation_SQlite_and_pandas(titanic_large))
spark_time_second_try.append(operation_spark(titanic_large))
# # **4. Analysis of results**
# ## ✔️ Checking of results with **small data**
for i in range(3):
print(f"Attempt:" + str(i + 1))
print(f"SQLite processing time: {sqlite_time_first_try[i]} seconds")
print(f"Spark processing time: {spark_time_first_try[i]} seconds\n")
# We can see that with a **small data** **SQLite has a big advantage over Spark**, especially in the **first attempt where spark uses it to store the data that will be used in memory**. We can see that after the **second attempt, spark has a drastic drop in performance**, this is because the data **is already saved in the chache, managing to make its calls faster**, we also observed that in **every attempt SQLite remains stable** with its time rate always very close.
# So in this case with a **small volume of data, the best thing to use would be SQLite**.
# ## ✔️ Checking results with **big data**
for i in range(3):
print(f"Attempt:" + str(i + 1))
print(f"SQLite processing time: {sqlite_time_second_try[i]} seconds")
print(f"Spark processing time: {spark_time_second_try[i]} seconds\n")
# With a **large volume of data** now we can see how **Spark outperforms SQLite**, having a **better performance from the second try while SQLite remains stable over time**.
# So, in cases with **big data, Spark performs more than 2x better than SQLite**.
# # **5. Conclusion**
# Create traces
trace1 = go.Box(
y=sqlite_time_first_try, name="SQLite First Try", marker_color="#3D9970"
)
trace2 = go.Box(y=spark_time_first_try, name="Spark First Try", marker_color="#FF4136")
trace3 = go.Box(
y=sqlite_time_second_try, name="SQLite Second Try", marker_color="#FF851B"
)
trace4 = go.Box(
y=spark_time_second_try, name="Spark Second Try", marker_color="#0074D9"
)
data = [trace1, trace2, trace3, trace4]
layout = go.Layout(
title="Performance Comparison Between SQLite and Spark",
yaxis=dict(title="Time (seconds)"),
boxmode="group", # group together boxes of the different traces for each value of x
)
fig = go.Figure(data=data, layout=layout)
fig.show()
| false | 0 | 5,341 | 5 | 5,341 | 5,341 |
||
129795974
|
<jupyter_start><jupyter_text>AMEX_data_sampled
Kaggle dataset identifier: amex-data-sampled
<jupyter_script># ### 🔥 Lecture 4: Developing a Credit Scoring Model with Machine Learning
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# > ### The code above is the boilerplate provided by Kaggle notebooks; it prints the paths of the input data files.
from IPython.core.display import HTML
with open("./CSS.css", "r") as file:
custom_css = file.read()
HTML(custom_css)
# > #### The Jupyter notebook styling was customized with the CSS file above.
import warnings
warnings.filterwarnings("ignore", module="sklearn.metrics.cluster")
# > #### Read the pickle file. You need to pass the path printed above.
df = pd.read_pickle("/kaggle/input/amex-data-sampled/train_df_sample.pkl")
# > #### Reset the index and take a first look at the data with head().
# ### 🧱 Feature Engineering
df = df.reset_index()
df.head()
# > #### This dataset is the Amex default prediction data pivoted to one row per customer and down-sampled to 100,000 customers.
# > #### The hashed ID values are visible. When the data is large, long strings like these drive up memory usage, so encode only as much as you actually need. The ID column is not strictly required in this notebook, but we implement a function that re-encodes the hashed values and apply it as an example.
import hashlib
def encode_customer_id(id_str):
    # Keep only the first 16 hex characters of the SHA-256 digest to shorten the ID
    encoded_id = hashlib.sha256(id_str.encode("utf-8")).hexdigest()[:16]
    return encoded_id
df["customer_ID"] = df["customer_ID"].apply(encode_customer_id)
def drop_null_cols(df, threshold=0.8):
    """
    Drop columns whose missing-value ratio is greater than or equal to threshold.
    """
    null_percent = df.isnull().mean()
    drop_cols = list(null_percent[null_percent >= threshold].index)
    df = df.drop(drop_cols, axis=1)
    print(f"Dropped {len(drop_cols)} columns: {', '.join(drop_cols)}")
    return df
# > #### First, let's remove variables with a high proportion of missing values. We assume the exploratory work on missingness has already been done, i.e. that the missing values in this data carry no special meaning. In real modeling work, understanding why values are missing is a very important step.
# > #### You can write a dedicated function to drop them, fold the step into feature selection later, or use an open-source library.
# > #### Wrapping repeated logic into functions to keep the code concise is very important. A quick look at the missing-ratio distribution, sketched below, can also help when choosing the threshold.
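# > #### A minimal sketch: inspect the worst offenders and how many columns would be dropped at the same 0.8 cutoff used by the function above.
null_percent = df.isnull().mean().sort_values(ascending=False)
print(null_percent.head(10))  # columns with the highest missing ratios
print((null_percent >= 0.8).sum(), "columns at or above the 0.8 threshold")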
df = drop_null_cols(df)
cat_features = [
"B_30",
"B_38",
"D_114",
"D_116",
"D_117",
"D_120",
"D_126",
"D_63",
"D_64",
"D_68",
]
cat_features = [f"{cf}_last" for cf in cat_features]
# > #### The list of categorical variables comes from the raw data description.
import random
num_cols = df.select_dtypes(include=np.number).columns.tolist()
num_cols = [col for col in num_cols if "target" not in col and col not in cat_features]
num_cols_sample = random.sample([col for col in num_cols if "target" not in col], 100)
# > #### We keep the categorical variables plus 100 randomly selected numeric variables (the Kaggle notebook hardware cannot comfortably handle many more features).
feature_list = num_cols_sample + cat_features
all_list = feature_list + ["target"]
df = df[all_list]
def summary(df):
print(f"data shape: {df.shape}")
summ = pd.DataFrame(df.dtypes, columns=["data type"])
summ["#missing"] = df.isnull().sum().values
summ["%missing"] = df.isnull().sum().values / len(df) * 100
summ["#unique"] = df.nunique().values
desc = pd.DataFrame(df.describe(include="all").transpose())
summ["min"] = desc["min"].values
summ["max"] = desc["max"].values
summ["first value"] = df.loc[0].values
summ["second value"] = df.loc[1].values
summ["third value"] = df.loc[2].values
return summ
# > #### The summary table produced by the function above lets you see the key information about the data at a glance. Instead of implementing it yourself, you could also use an open-source library.
summary(df)
#
# 💡 Notes:
# * Many open-source libraries implement this kind of quick EDA.
#
# * pandas-profiling automatically generates summary statistics for a dataframe: basic statistics, correlations, missing data, variable distributions, and more.
#
# * missingno visualizes missing data, making it easy to spot which values are absent from the dataset. A short usage sketch follows below.
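# > #### A minimal usage sketch, assuming the optional libraries are installed (pandas-profiling ships as ydata-profiling in recent versions):
import missingno as msno

msno.matrix(df)  # visual overview of where values are missing
msno.bar(df)  # missing-value counts per column

from ydata_profiling import ProfileReport

profile = ProfileReport(df, minimal=True)  # minimal=True keeps it fast on wide frames
profile.to_file("eda_report.html")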
import gc
gc.collect()
# gc.collect() explicitly invokes Python's garbage collector to reclaim memory. The garbage collector is Python's mechanism for automatically freeing objects that are no longer referenced.
# Reasons for calling gc.collect():
# Memory management: Python handles dynamic allocation and deallocation, so memory management matters. Calling gc.collect() to release objects that are no longer needed helps keep memory usage in check.
# Preventing memory leaks: a memory leak occurs when objects that are no longer used stay in memory and usage keeps growing. Forcing a collection reclaims objects that have not yet been released.
# Performance: because the collector is responsible for reclaiming memory, promptly collecting unneeded objects can improve the overall performance of the program.
# In general there is no need to call gc.collect() explicitly; the interpreter runs garbage collection automatically. It is mainly useful when you want memory reclaimed at a specific point, typically in large-scale data processing or long-running programs. A minimal sketch of the usual pattern follows below.
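# > #### A minimal sketch of the usual pattern in memory-constrained notebooks: delete large intermediates explicitly, then trigger a collection (gc.collect() returns the number of unreachable objects it found).
big_tmp = df.copy()  # stand-in for a large intermediate result
del big_tmp
print("gc.collect() reclaimed", gc.collect(), "unreachable objects")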
df[cat_features].head()
df[cat_features].dtypes
for categorical_feature in cat_features:
if df[categorical_feature].dtype == "float16":
df[categorical_feature] = df[categorical_feature].astype(str)
if df[categorical_feature].dtype == "category":
df[categorical_feature] = df[categorical_feature].astype(str)
elif df[categorical_feature].dtype == "object":
df[categorical_feature] = df[categorical_feature].astype(str)
# > #### Unify the data types of the categorical variables.
df[cat_features].dtypes
# > #### Next is the preprocessing step that handles missing values. The imputation strategy should be decided through more detailed data analysis; here we use the simplest approach, but in practice the decision should be based on thorough exploration.
from sklearn.preprocessing import LabelEncoder
le_encoder = LabelEncoder()
for categorical_feature in cat_features:
df[categorical_feature].fillna(value="NaN", inplace=True)
df[categorical_feature] = le_encoder.fit_transform(df[categorical_feature])
from sklearn.impute import SimpleImputer
def impute_nan(df, num_cols, strategy="mean"):
    """
    Impute NaN values in num_cols according to the given strategy.
    :param df: DataFrame
    :param num_cols: list, numeric columns to impute
    :param strategy: str, imputation strategy (default: 'mean')
    :return: DataFrame, the imputed DataFrame
    """
    imputer = SimpleImputer(strategy=strategy)
    df[num_cols] = imputer.fit_transform(df[num_cols])
    return df
df = impute_nan(df, num_cols_sample, strategy="mean")
df.head()
# > #### Let's do some simple visualization. The order does not really matter; in fact, exploring the data visually first and then deciding on the preprocessing is also a good approach. Keep in mind, though, that mismatched data types or remaining missing values can make the plots misleading.
import plotly.express as px
fig2 = px.pie(
df,
names="target",
height=400,
width=600,
hole=0.7,
title="target class Overview",
color_discrete_sequence=["#4c78a8", "#72b7b2"],
)
fig2.update_traces(
hovertemplate=None, textposition="outside", textinfo="percent+label", rotation=0
)
fig2.update_layout(
margin=dict(t=100, b=30, l=0, r=0),
showlegend=False,
plot_bgcolor="#fafafa",
paper_bgcolor="#fafafa",
title_font=dict(size=20, color="#555", family="Lato, sans-serif"),
font=dict(size=17, color="#8a8d93"),
hoverlabel=dict(bgcolor="#444", font_size=13, font_family="Lato, sans-serif"),
)
fig2.show()
import seaborn as sns
import matplotlib.pyplot as plt
import math
sns.set(style="whitegrid")
fig, axs = plt.subplots(math.ceil(len(cat_features) / 2), 2, figsize=(20, 30))
for i, feature in enumerate(cat_features):
row = i // 2
col = i % 2
sns.countplot(x=feature, data=df, ax=axs[row, col])
axs[row, col].set_xlabel(feature)
axs[row, col].set_ylabel("Count")
sns.despine()
plt.show()
# > #### **Q. How would you modify the code above to plot the target distribution for each feature?** One possible answer is sketched below.
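# > #### A possible answer (a sketch): pass hue="target" to countplot so each category's bars are split by target class.
fig, axs = plt.subplots(math.ceil(len(cat_features) / 2), 2, figsize=(20, 30))
for i, feature in enumerate(cat_features):
    row, col = divmod(i, 2)
    sns.countplot(x=feature, hue="target", data=df, ax=axs[row, col])
    axs[row, col].set_xlabel(feature)
    axs[row, col].set_ylabel("Count")
sns.despine()
plt.show()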
exp_cols = random.sample(num_cols_sample, k=4)
# num_data = train_data.select_dtypes(exclude=['object']).copy()
plt.figure(figsize=(14, 10))
for idx, column in enumerate(exp_cols):
plt.subplot(3, 2, idx + 1)
sns.histplot(x=column, hue="target", data=df, bins=30, kde=True, palette="YlOrRd")
plt.title(f"{column} Distribution")
plt.ylim(0, 100)
plt.tight_layout()
# > #### We selected just 4 variables for the plots above. They show how each variable is distributed for each target value (0 or 1). The more the two class-conditional distributions differ, the more predictive power we can expect from the variable.
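# > #### One way to quantify that visual separation (a sketch, assuming scipy is available): the two-sample Kolmogorov-Smirnov statistic between the class-conditional distributions; larger values indicate better separation.
from scipy.stats import ks_2samp

ks_scores = {}
for column in exp_cols:  # the same 4 sampled columns plotted above
    non_default = df.loc[df["target"] == 0, column].dropna()
    default = df.loc[df["target"] == 1, column].dropna()
    ks_scores[column] = ks_2samp(non_default, default).statistic
print(pd.Series(ks_scores).sort_values(ascending=False))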
# > #### 📊 Implementing a function that computes WOE and IV values
def calculate_woe_iv(df, feature_list, cat_features, target):
    result_df = pd.DataFrame(columns=["Feature", "Bins", "WOE", "IV", "IV_sum"])
    selected_features = []  # list of selected features
    bin_edges_dict = {}  # bin edges per feature
    woe_dict = {}  # WOE values per feature
    for feature in feature_list:
        if feature in cat_features:  # categorical feature
            df_temp = df.copy()
            df_temp[feature + "_bins"] = df_temp[feature]  # use the categories themselves as bins
            bin_edges_dict[feature] = sorted(
                df[feature].unique()
            )  # store the unique values as the bin edges
        else:  # numeric feature
            df_temp = df.copy()
            df_temp[feature + "_bins"], bin_edges = pd.qcut(
                df_temp[feature], 10, duplicates="drop", retbins=True
            )
            bin_edges_dict[feature] = bin_edges  # split into 10 bins and store the bin edges
        # Count events and non-events in each bin of the feature.
        grouped_data = (
            df_temp.groupby(feature + "_bins")[target]
            .agg(
                [
                    ("non_event", lambda x: sum(1 - x)),  # number of non-events (0)
                    ("event", lambda x: sum(x)),  # number of events (1)
                ]
            )
            .reset_index()
        )
        # Compute the non-event and event proportions.
        grouped_data["non_event_prop"] = grouped_data["non_event"] / sum(
            grouped_data["non_event"]
        )
        grouped_data["event_prop"] = grouped_data["event"] / sum(grouped_data["event"])
        # Compute the Weight of Evidence (WOE).
        grouped_data["WOE"] = np.where(
            grouped_data["event_prop"] == 0,
            0,
            np.log(grouped_data["non_event_prop"] / grouped_data["event_prop"]),
        )
        # Compute the Information Value (IV).
        grouped_data["IV"] = (
            grouped_data["non_event_prop"] - grouped_data["event_prop"]
        ) * grouped_data["WOE"]
        iv_sum = sum(grouped_data["IV"])
        if iv_sum >= 0.02:  # select the feature if its IV sum is at least 0.02
            selected_features.append(feature)
        result = pd.DataFrame()
        result["Feature"] = [feature] * len(grouped_data)
        result["Bins"] = grouped_data[feature + "_bins"]
        result["WOE"] = grouped_data["WOE"]
        result["IV"] = grouped_data["IV"]
        result["IV_sum"] = [iv_sum] * len(grouped_data)
        result_df = pd.concat([result_df, result])
        woe_dict[feature] = grouped_data.set_index(feature + "_bins")["WOE"].to_dict()
    # Print the number and list of selected features.
    print("Total number of features:", len(feature_list))
    print("Number of selected features:", len(selected_features))
    print("Selected features:", selected_features)
    return result_df, selected_features, bin_edges_dict, woe_dict
# This function computes the Weight of Evidence (WOE) and Information Value (IV) for a given dataframe (df), feature list (feature_list), list of categorical features (cat_features), and target variable (target).
# In more detail:
# 1. If a feature is categorical, its unique values are used directly as bins.
# 2. If a feature is continuous, it is split into 10 quantile bins and the bin edges are stored.
# 3. For each bin, the number of events and non-events (cases where the event did not occur) is counted.
# 4. The proportions of non-events and events are computed.
# 5. The Weight of Evidence (WOE) is computed. WOE is the log of the ratio of the non-event proportion to the event proportion.
# 6. The Information Value (IV) is computed. IV is the difference between the non-event and event proportions multiplied by the WOE.
# 7. If the feature's total IV is at least 0.02, the feature is added to the list of selected features.
# 8. The computed WOE and IV values are appended to the result dataframe (result_df).
# 9. The WOE values are also stored per feature in a dictionary (woe_dict).
# 10. Finally, the number and names of the selected features are printed.
# The function returns result_df (the result dataframe), selected_features (the list of selected features), bin_edges_dict (a dictionary of bin edges per feature), and woe_dict (a dictionary of WOE values per feature).
#
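# A tiny worked example with made-up numbers (an added sketch, not part of the original
# notebook): if one bin holds 30% of all non-events but only 10% of all events, then
# WOE = ln(0.30 / 0.10) ≈ 1.10, and that bin contributes IV = (0.30 - 0.10) * 1.10 ≈ 0.22
# to the feature's total IV.
non_event_prop_ex, event_prop_ex = 0.30, 0.10
woe_ex = np.log(non_event_prop_ex / event_prop_ex)
iv_ex = (non_event_prop_ex - event_prop_ex) * woe_ex
print(f"example WOE: {woe_ex:.3f}, example IV contribution: {iv_ex:.3f}")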
result_df, selected_features, bin_edges_dict, woe_dict = calculate_woe_iv(
df, feature_list, cat_features, "target"
)
def transform_to_woe(
    df, selected_features, cat_features, bin_edges_dict, woe_dict, target
):
    df_woe = df[selected_features + [target]].copy()
    for feature in selected_features:
        if feature in cat_features:
            # For categorical features, map each category directly to its WOE value.
            df_woe[feature] = df_woe[feature].map(woe_dict[feature])
        else:
            # For continuous features, assign WOE values according to the stored bin edges.
            feature_bins = pd.cut(
                df_woe[feature], bins=bin_edges_dict[feature], include_lowest=True
            )
            df_woe[feature] = feature_bins.map(woe_dict[feature])
    return df_woe
df_woe = transform_to_woe(
df, selected_features, cat_features, bin_edges_dict, woe_dict, "target"
)
df_woe.head()
# ### 🧱 Modeling
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score
def xgboost_model(df_woe, target, folds=5, seed=2023):
    xgb_models = []  # trained XGBoost models, one per fold
    xgb_oof = []  # out-of-fold prediction results
    predictions = np.zeros(len(df_woe))  # accumulated predictions for the full dataset
    f_imp = []  # feature importances per fold
    X = df_woe.drop(columns=[target])  # independent variables
    y = df_woe[target]  # dependent variable
skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
print(f'{"#"*24} Training FOLD {fold+1} {"#"*24}')
X_train, y_train = X.iloc[train_idx], y.iloc[train_idx]
X_valid, y_valid = X.iloc[val_idx], y.iloc[val_idx]
watchlist = [(X_train, y_train), (X_valid, y_valid)]
model = XGBClassifier(
n_estimators=1000, n_jobs=-1, max_depth=4, eta=0.2, colsample_bytree=0.67
)
model.fit(
X_train, y_train, eval_set=watchlist, early_stopping_rounds=300, verbose=0
)
        val_preds = model.predict_proba(X_valid)[:, 1]  # predicted probabilities on the validation fold
        val_score = roc_auc_score(y_valid, val_preds)  # ROC AUC on the validation fold
best_iter = model.best_iteration
        idx_pred_target = np.vstack(
            [val_idx, val_preds, y_valid]
        ).T  # array of validation-set indices, predicted probabilities, and true targets
        f_imp.append(
            {i: j for i, j in zip(X_train.columns, model.feature_importances_)}
        )  # store this fold's feature importances
print(f'{" "*20} auc:{val_score:.5f} {" "*6} best iteration:{best_iter}')
        xgb_oof.append(idx_pred_target)  # collect out-of-fold predictions
        xgb_models.append(model)  # collect the trained model
if val_score > 0.917:
            predictions += model.predict_proba(X)[:, 1]  # accumulate predictions only from folds that meet the AUC threshold
    predictions /= folds  # average the accumulated probabilities over the number of folds
    mean_val_auc = np.mean(
        [roc_auc_score(oof[:, 2], oof[:, 1]) for oof in xgb_oof]
    )  # mean out-of-fold ROC AUC across folds
print("*" * 45)
print(f"Mean AUC: {mean_val_auc:.5f}")
return xgb_models, xgb_oof, predictions, f_imp
# xgb_models: list that stores the trained XGBoost models.
# xgb_oof: list that stores the out-of-fold prediction results. Each element is an array of validation indices, predicted probabilities, and true target values.
# predictions: array that stores the predictions for the full dataset, initialized to all zeros.
# f_imp: list that stores the feature importances. Each element is a dictionary mapping a feature to its importance.
# X: the independent variables.
# y: the dependent variable.
# skf: the Stratified K-Fold object used to split the data.
# fold: the current fold number.
# train_idx: the training indices for the current fold.
# val_idx: the validation indices for the current fold.
# X_train, y_train: the training data for the current fold.
# X_valid, y_valid: the validation data for the current fold.
# watchlist: the datasets passed to XGBoost for monitoring during training.
# model: the XGBoost classifier, initialized with the given hyperparameters and trained.
# val_preds: the predicted probabilities on the validation set.
# val_score: the ROC AUC score on the validation set.
# best_iter: the best iteration selected by early stopping.
# idx_pred_target: array of validation indices, predicted probabilities, and true target values.
# mean_val_auc: the ROC AUC score averaged over the out-of-fold predictions of all folds.
df_woe.dtypes
# > #### Convert category-typed columns so the XGBoost model can be fit.
def convert_category_to_numeric(df):
for col in df.columns:
if str(df[col].dtype) == "category":
df[col] = df[col].astype("int")
return df
df_woe = convert_category_to_numeric(df_woe)
import warnings
warnings.filterwarnings("ignore", category=UserWarning, module="xgboost")
xgb_models, xgb_oof, predictions, f_imp = xgboost_model(
df_woe, "target", folds=5, seed=2023
)
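# The per-fold feature importances were collected in f_imp but never inspected above.
# A minimal sketch (an addition, not the author's code) that averages them across folds
# and lists the strongest features:
f_imp_df = pd.DataFrame(f_imp)  # one row per fold, one column per feature
mean_importance = f_imp_df.mean().sort_values(ascending=False)
print(mean_importance.head(10))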
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc, confusion_matrix
import seaborn as sns
plt.figure(figsize=(14, 6))
# Plot ROC curves
plt.subplot(1, 2, 1)
for oof in xgb_oof:
fpr, tpr, _ = roc_curve(oof[:, 2], oof[:, 1])
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label=f"ROC curve (area = {roc_auc:.2f})")
plt.plot([0, 1], [0, 1], color="navy", linestyle="--")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic")
plt.legend(loc="lower right")
# Compute and plot confusion matrix
plt.subplot(1, 2, 2)
# Compute and plot the confusion matrix.
# We need predicted classes rather than probabilities,
# so a prediction threshold of 0.5 is used.
predictions_class = [1 if pred > 0.5 else 0 for pred in predictions]
cm = confusion_matrix(df_woe["target"], predictions_class)
sns.heatmap(cm, annot=True, fmt="d")
plt.title("Confusion Matrix")
plt.xlabel("Predicted")
plt.ylabel("True")
plt.tight_layout()
plt.show()
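# Note that `predictions` was produced by models scoring rows they also saw during
# training, so the matrix above is optimistic. A less biased alternative (an added
# sketch) builds the confusion matrix from the out-of-fold predictions instead.
oof_arr = np.concatenate(xgb_oof)  # columns: validation index, predicted probability, true target
oof_pred_class = (oof_arr[:, 1] > 0.5).astype(int)
print(confusion_matrix(oof_arr[:, 2].astype(int), oof_pred_class))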
import numpy as np
import matplotlib.pyplot as plt
# Define the base score
base_score = 650
# Define the PDO (points to double the odds)
PDO = 20
# Calculate the factor and offset
factor = PDO / np.log(2)
offset = base_score - (factor * np.log(20))
# Define a function to calculate the score
def calculate_score(probability, factor, offset):
odds = probability / (1 - probability)
score = offset + (factor * np.log(odds))
return np.clip(score, 250, 1000) # Clip scores between 250 and 1000
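# Quick sanity check of the scaling (an added sketch): with these settings, a "good"
# probability whose odds are 20:1 should map to the base score of 650, and doubling the
# odds to 40:1 should add exactly PDO = 20 points.
p_odds_20 = 20 / 21  # probability corresponding to odds of 20:1
p_odds_40 = 40 / 41  # probability corresponding to odds of 40:1
print(calculate_score(p_odds_20, factor, offset))  # ~650
print(calculate_score(p_odds_40, factor, offset))  # ~670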
# Calculate the scores
scores = calculate_score(1 - predictions, factor, offset)
# Round the scores to the nearest integer
scores = np.round(scores)
# Plot the score distribution
plt.hist(scores, bins=20)
plt.title("Credit Score Distribution")
plt.xlabel("Score")
plt.ylabel("Frequency")
plt.show()
# Add the predictions and scores to the dataframe
df_woe["probability"] = predictions
df_woe["credit_score"] = scores
# Select 5 random samples
samples = df_woe.sample(5, random_state=42)
# Display the samples
samples
# ### 🧱 Inference
def predict_and_score(model, instance, factor, offset):
# Reshape the instance if it's a series/single row
if len(instance.shape) == 1:
instance = instance.values.reshape(1, -1)
# Make prediction
probability = model.predict_proba(instance)[:, 1]
# Calculate score
score = calculate_score(1 - probability, factor, offset)
# Round the score to the nearest integer
score = np.round(score)
return score[0]
inference = df_woe.drop(["target", "probability", "credit_score"], axis=1)
inference.sample(1)
# Extract one sample from the data
sample = inference.sample(1)
# Call the function with the first trained model and the sample data
score = predict_and_score(xgb_models[0], sample, factor, offset)
print("해당 고객의 신용 점수는 다음과 같습니다: ", score)
# ### 🧱 Exercise: How would you compute the KS statistic? Try building it yourself, using the example code below as a reference.
# def calculate_ks(actual, predicted):
# pairs = zip(actual, predicted)
# sorted_pairs = sorted(pairs, key=lambda x: x[1], reverse=True)
# num_positive = sum(1 for actual, _ in sorted_pairs if actual == 1)
# cum_positive_ratio = 0
# max_ks = 0
# max_threshold = 0
# for i, (actual, predicted) in enumerate(sorted_pairs):
# if actual == 1:
# cum_positive_ratio += 1 / num_positive
# else:
# cum_negative_ratio = (i + 1 - cum_positive_ratio) / (len(sorted_pairs) - num_positive)
# current_ks = abs(cum_positive_ratio - cum_negative_ratio)
# if current_ks > max_ks:
# max_ks = current_ks
# max_threshold = predicted
# return max_ks, max_threshold
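# One way to sanity-check a KS implementation (an added sketch, not part of the original
# exercise): KS is the maximum gap between the TPR and FPR curves, so it can be read off
# sklearn's roc_curve output applied to the out-of-fold predictions.
oof_all = np.concatenate(xgb_oof)  # columns: validation index, predicted probability, true target
fpr_ks, tpr_ks, _ = roc_curve(oof_all[:, 2], oof_all[:, 1])
print(f"KS statistic (out-of-fold): {np.max(tpr_ks - fpr_ks):.4f}")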
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/795/129795974.ipynb
|
amex-data-sampled
|
kimtaehun
|
[{"Id": 129795974, "ScriptId": 38599529, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13683894, "CreationDate": "05/16/2023 14:18:02", "VersionNumber": 1.0, "Title": "4\uac15)AMEX \ub370\uc774\ud130\ub97c \uc0ac\uc6a9\ud55c \uc2e0\uc6a9\ud3c9\uac00 \ubaa8\ub378 \ub9cc\ub4e4\uae30", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 611.0, "LinesInsertedFromPrevious": 611.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 4}]
|
[{"Id": 186165241, "KernelVersionId": 129795974, "SourceDatasetVersionId": 5688399}]
|
[{"Id": 5688399, "DatasetId": 3270398, "DatasourceVersionId": 5763995, "CreatorUserId": 1885842, "LicenseName": "Unknown", "CreationDate": "05/15/2023 07:57:57", "VersionNumber": 1.0, "Title": "AMEX_data_sampled", "Slug": "amex-data-sampled", "Subtitle": "This is a small-sized sampled dataset from AMEX dafault prediction dataset", "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3270398, "CreatorUserId": 1885842, "OwnerUserId": 1885842.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5688399.0, "CurrentDatasourceVersionId": 5763995.0, "ForumId": 3336031, "Type": 2, "CreationDate": "05/15/2023 07:57:57", "LastActivityDate": "05/15/2023", "TotalViews": 127, "TotalDownloads": 3, "TotalVotes": 8, "TotalKernels": 3}]
|
[{"Id": 1885842, "UserName": "kimtaehun", "DisplayName": "DataManyo", "RegisterDate": "05/05/2018", "PerformanceTier": 4}]
|
| false | 0 | 8,081 | 4 | 8,106 | 8,081 |
||
129795303
|
# all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
df_train = pd.read_csv("/kaggle/input/titanic/train.csv")
df_test = pd.read_csv("/kaggle/input/titanic/test.csv")
df_submit = pd.read_csv("/kaggle/input/titanic/gender_submission.csv")
# ## **Understanding basic info about data**
# which factors influence survival?
# 1. pclass
# 2. sex
# 3. age
# finding out further using data analysis
df = df_train.copy()
# view data
df.head()
df.tail()
# get basic info on data
df.info()
df.shape
df.columns
# for stat summary
df.describe()
# ## **Data Extraction**
# extract only required columns
df1 = df[
[
"PassengerId",
"Survived",
"Pclass",
"Sex",
"Age",
"SibSp",
"Parch",
"Fare",
"Cabin",
"Embarked",
]
]
# ## **Missing values**
# isnull() detects null values
df1.isnull().sum()
# drop cabin - too many missing vals
df2 = df1.drop("Cabin", axis=1)
# replace nulls in age with mean
age_mean = df2["Age"].mean()
df2["Age"] = df2["Age"].fillna(age_mean)
df2.isnull().sum()
embarked_mode = df2["Embarked"].mode()
df2["Embarked"] = df2["Embarked"].fillna(embarked_mode.iloc[0])
df2.isnull().sum()
# ## **categorical values**
df2.info()
# Sex - encode the two categories as numbers
df2["Sex"].value_counts()
# df2['Sex'].head()
df3 = df2.replace(["male", "female"], [0, 1])
# df3.head()
df3["Sex"].value_counts()
# Embarked
df3["Embarked"].value_counts()
df4 = df3.replace(["C", "Q", "S"], [0, 1, 2])
df4["Embarked"].value_counts()
df4.info()
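# A note on the encoding above (an added sketch, not in the original notebook):
# calling df.replace on the whole dataframe happens to work here, but it would also
# rewrite "male"/"female"/"C"/"Q"/"S" wherever they appeared in other columns.
# Mapping each column explicitly is safer and gives the same result:
sex_mapped = df2["Sex"].map({"male": 0, "female": 1})
embarked_mapped = df2["Embarked"].map({"C": 0, "Q": 1, "S": 2})
print((sex_mapped == df4["Sex"]).all(), (embarked_mapped == df4["Embarked"]).all())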
# ## **Exploratory Data analysis**
# separate numerical and categorical columns
cat_cols = df2.select_dtypes(include=["object"])
num_cols = df2.select_dtypes(include=np.number)
print(cat_cols.columns)
print(num_cols.columns)
# df4 is preprocessed and ready to use
# ### **Univariate Analysis**
sns.boxplot(x=df2["Age"])
# how are the variables related?
corr = df4.corr()
sns.heatmap(corr, annot=True, cmap=sns.diverging_palette(220, 0, as_cmap=True))
df4.tail(5)
# ### **Histograms**
# A histogram is a graphical representation of the distribution of a dataset. It is a way to visualize the frequency or occurrence of values within different intervals, also known as bins or buckets.
# #### **Age**
# histogram of Age
sns.histplot(df4.Age)
# predominantly passengers in their late 20s to 30s; the tall bar near 30 is partly an artifact of filling missing ages with the mean (~29.7)
fig, ax = plt.subplots()
counts, edges, bars = ax.hist(df4.Age, color="pink")
for c in ax.containers:
ax.bar_label(c)
plt.show()
# #### **Sex**
# sex
sns.histplot(df4.Sex, bins=2)
fig, ax = plt.subplots()
counts, edges, bars = ax.hist(df4.Sex, bins=2, color="pink")
for c in ax.containers:
ax.bar_label(c)
plt.show()
# #### **Survived**
fig, ax = plt.subplots()
counts, edges, bars = ax.hist(df4.Survived, color="pink", bins=2)
for c in ax.containers:
ax.bar_label(c)
plt.show()
# how is sex related to survival? (see the quick check below)
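# A quick check (an added sketch, not in the original notebook): survival rate by sex,
# using the encoded values where 0 = male and 1 = female.
print(df4.groupby("Sex")["Survived"].mean())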
# #### **Ticket class**
# sns.histplot(df4.Pclass,bins = 3)
fig, ax = plt.subplots()
counts, edges, bars = ax.hist(df4.Pclass, color="pink", bins=3)
for c in ax.containers:
ax.bar_label(c)
# df4.Pclass.value_counts()
# #### **num of siblings**
df4.SibSp.value_counts()
fig, ax = plt.subplots()
counts, edges, bars = ax.hist(df4.SibSp, bins=7, color="pink")
for c in ax.containers:
ax.bar_label(c)
# predominantly, people did not have siblings or spouses aboard
# #### **num of parents or children aboard**
df4.Parch.value_counts()
fig, ax = plt.subplots()
counts, edges, bars = ax.hist(df4.Parch, bins=6, color="pink")
for c in ax.containers:
ax.bar_label(c)
# most people did not have parents or children aboard
# #### **Fare**
# a histogram is a coarse way to look at Fare, but it helps
fig, ax = plt.subplots()
counts, edges, bars = ax.hist(df4.Fare, bins=20, color="pink")
for c in ax.containers:
ax.bar_label(c)
df4.Fare.min()
df4.Fare.max()
# fares are heavily skewed toward the low end, consistent with the large share of low-class tickets
# a handful of passengers have a recorded fare of 0, i.e. they apparently travelled for free
# df4.loc[df4.Fare < 25].shape
# #### **Embarked**
df4.Embarked.value_counts()
fig, ax = plt.subplots()
counts, edges, bars = ax.hist(df4.Embarked, bins=3, color="pink")
for c in ax.containers:
ax.bar_label(c)
# ## **Bivariate Analysis**
df4.columns
# age vs fare
fig, ax = plt.subplots()
scatter = ax.scatter(df4.Age, df4.Fare, c=df4.Survived)
ax.legend(*scatter.legend_elements(), title="Survived")
plt.xlabel("Age")
plt.ylabel("Fare")
# color scatter plot by survival
# age vs pclass
fig, ax = plt.subplots()
scatter = ax.scatter(df4.Age, df4.Pclass, c=df4.Survived)
ax.legend(*scatter.legend_elements(), title="Survived")
plt.xlabel("Age")
plt.ylabel("Pclass")
# age vs sex
fig, ax = plt.subplots()
scatter = ax.scatter(df4.Age, df4.Sex, c=df4.Survived)
ax.legend(*scatter.legend_elements(), title="Survived")
plt.xlabel("Age")
plt.ylabel("Sex")
# age vs sibsp
fig, ax = plt.subplots()
scatter = ax.scatter(df4.Age, df4.SibSp, c=df4.Survived)
ax.legend(*scatter.legend_elements(), title="Survived")
plt.xlabel("Age")
plt.ylabel("SibSp")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/795/129795303.ipynb
| null | null |
[{"Id": 129795303, "ScriptId": 38482468, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10503867, "CreationDate": "05/16/2023 14:13:25", "VersionNumber": 2.0, "Title": "titanic survival", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 262.0, "LinesInsertedFromPrevious": 168.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 94.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
| null | null | null | null |
| false | 0 | 1,870 | 2 | 1,870 | 1,870 |
||
129795755
|
<jupyter_start><jupyter_text>Top 10000 popular Movies TMDB
This is a collection of metadata about the top 10,000 most popular movies on **The Movie Database (TMDB)** as of May 2023. The dataset includes information such as movie titles, release dates, runtime, genres, production companies, budget, and revenue. This data is collected from TMDB's public [API](https://developer.themoviedb.org/docs).
#### Little bit about [TMDB](https://www.themoviedb.org/)
TMDB (The Movie Database) is a popular online database and community platform that provides a vast collection of information about movies, TV shows, and other related content. TMDB allows users to browse and search for movies and TV shows, view information such as cast, crew, synopsis, and ratings, and also contribute to the community by adding their own reviews, ratings, and other content.
#### Purpose
The dataset is intended for use by data analysts, researchers, and developers who are interested in studying or analyzing the popularity and characteristics of movies. The dataset can be used to perform a wide range of analyses, such as exploring trends in movie genres over time, identifying patterns in movie budgets and revenues, and analyzing the impact of different attributes on a movie's popularity.
####Attributes
- **id**: Unique identifier assigned to each movie in the TMDB database.
- **title**: Title of the movie.
- **release_date**: Date on which the movie was released.
- **genres**: List of genres associated with the movie.
- **original_language**: Language in which the movie was originally produced.
- **vote_average**: Average rating given to the movie by TMDB users.
- **vote_count**: Number of votes cast for the movie on TMDB.
- **popularity**: Popularity score assigned to the movie by TMDB based on user engagement.
- **overview**: Brief description or synopsis of the movie.
- **budget**: Estimated budget for producing the movie in USD.
- **production_companies**: List of production companies involved in making the movie.
- **revenue**: Total revenue generated by the movie in USD.
- **runtime**: Total runtime of the movie in minutes.
- **tagline**: Short, memorable phrase associated with the movie, often used in promotional material.
#### [Dataset Creation](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook)
The dataset mentioned has been created by fetching raw data from TMDB's public API, and then cleaning and preprocessing the data to improve its quality and make it easier to work with. The cleaning process has been done using a notebook available [here](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook), which outlines the steps taken to transform the raw data into a more usable format.
Kaggle dataset identifier: top-10000-popular-movies-tmdb-05-2023
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv(
"/kaggle/input/top-10000-popular-movies-tmdb-05-2023/popular_10000_movies_tmdb.csv"
)
df
# # top 5 most popular movies in the dataset
top_5_popular_movies = df[["title", "popularity"]]
top_5_popular_movies = top_5_popular_movies.sort_values(["popularity"], ascending=False)
top_5_popular_movies.head(5)
# # the average of popularity for each language
most_popularity_lng = df.groupby("original_language")["popularity"].mean()
# most_popularity_lng = df[['original_language',"popularity"]]
most_popularity_lng
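# A mean over a handful of movies can be misleading, so as an added sketch (not in the
# original notebook) we can require a minimum number of movies per language before
# ranking languages by average popularity.
lang_stats = df.groupby("original_language")["popularity"].agg(["mean", "count"])
print(lang_stats[lang_stats["count"] >= 20].sort_values("mean", ascending=False).head(10))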
# # Top 10 highest-grossing movies (by revenue)
top_10_selling_movie = df[["title", "revenue"]]
top_10_selling_movie = top_10_selling_movie.sort_values(["revenue"], ascending=False)
top_10_selling_movie = top_10_selling_movie.head(10)
# # Verbalize your insights in Markdown cells.
# # Top 10,000 Popular Movies (May 2023) - Insights
# This dataset provides information about the top 10,000 popular movies as of May 2023, sourced from TMDB (The Movie Database). Let's explore some insights from this dataset:
# 1. Movie Genres
# The dataset includes a wide range of movie genres. By analyzing the genre distribution, we can observe the popularity of different genres among the top 10,000 movies. It would be interesting to see which genres are most prevalent and if there are any emerging trends in movie preferences.
# 2. Ratings and Votes
# The dataset contains each movie's average rating (vote_average) and vote count (vote_count), which can be used to gauge the overall reception of these films. We can analyze the average ratings, weighted by the number of votes behind them, to identify the most well-received movies among the top 10,000.
# 3. Box Office Performance
# Movies that make it to the top 10,000 popular list often have significant box office success. By exploring the dataset, we can gather information on the worldwide and domestic box office earnings for these movies. It would be fascinating to examine the correlation between a film's popularity and its financial performance.
# 4. Movie Directors and Cast
# This dataset does not include director or cast columns, but joining it with credits data from the TMDB API would let us check whether specific directors or actors/actresses are more frequently associated with successful movies and explore any patterns or preferences among the filmmakers and actors involved.
# 5. Release Year Distribution
# Analyzing the distribution of movie release years in the dataset can help us understand if there are any temporal patterns or preferences among the top 10,000 popular movies. We can identify if recent releases dominate the list or if there are notable classics that continue to maintain their popularity over time.
# 6. Movie Runtimes
# Examining the movie runtimes can give us an idea of the preferred duration among the top 10,000 movies. We can analyze the distribution of runtimes and identify any trends or patterns in movie length. This insight could help filmmakers and studios understand audience preferences when it comes to movie duration.
# 7. Language Diversity
# By analyzing the languages of the top 10,000 movies, we can gain insights into the diversity and distribution of films from different regions. It would be interesting to identify which languages are most prevalent and if there are any emerging international cinema trends.
# 8. Production Companies
# Exploring the production companies associated with the top 10,000 movies can reveal patterns in successful collaborations. We can identify if certain production companies are consistently associated with popular movies and analyze any relationships between production companies and film success.
# These insights provide a starting point for exploring the dataset of the top 10,000 popular movies from TMDB in May 2023. By diving deeper into these aspects, we can gain a better understanding of the movie industry's current trends, preferences, and patterns.
import seaborn as sns
import matplotlib.pyplot as plt
top_5_popular_movies = top_5_popular_movies.head(5)
plt.figure()
sns.barplot(x="popularity", y="title", data=top_5_popular_movies, palette="viridis")
plt.title("top 5 popular movie")
plt.xlabel("popularity")
plt.ylabel("title")
plt.show()
plt.figure(figsize=(20, 30))
sns.barplot(x=most_popularity_lng.values, y=most_popularity_lng.index)
plt.title("The Average of popularity for each language")
plt.xlabel("Avg")
plt.ylabel("Language")
plt.show()
# top_10_selling_movie
plt.figure(figsize=(20, 10))
sns.barplot(x="revenue", y="title", data=top_10_selling_movie)
# sns.boxenplot(x="title", y="revenue",
# color="b", order=top_10_selling_movie.title,
# scale="linear", data=top_10_selling_movie)
# plt.title('Regional Sales Analysis')
# plt.xlabel('Region')
# plt.ylabel('Genre')
# # 2nd DATASET
df_2 = pd.read_csv("/kaggle/input/mr-beast-youtube-stats-and-subtitles/MrBeast.csv")
df_2
# # top 5 most viewed videos
top_5_viewed = (
    df_2[["title", "viewCount"]].sort_values(["viewCount"], ascending=False).head(5)
)
top_5_viewed
plt.figure(figsize=(20, 10))
sns.barplot(x="viewCount", y="title", data=top_5_viewed)
# # the relation between viewCount and commentCount
rel_table = df_2[["viewCount", "commentCount"]]
rel_table
plt.figure(figsize=(20, 10))
sns.lineplot(x="viewCount", y="commentCount", data=rel_table, color="blue")
plt.title("the relation between viewCount and commentCount")
plt.xlabel("viewCount")
plt.ylabel("commentCount")
# # longest videos
longest_10 = (
df_2[["title", "duration_seconds"]]
.sort_values(["duration_seconds"], ascending=False)
.head(10)
)
longest_10
plt.figure(figsize=(20, 10))
sns.barplot(x="duration_seconds", y="title", data=longest_10)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/795/129795755.ipynb
|
top-10000-popular-movies-tmdb-05-2023
|
ursmaheshj
|
[{"Id": 129795755, "ScriptId": 38594570, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14984846, "CreationDate": "05/16/2023 14:16:37", "VersionNumber": 1.0, "Title": "Data Visualization", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 138.0, "LinesInsertedFromPrevious": 138.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
|
[{"Id": 186164925, "KernelVersionId": 129795755, "SourceDatasetVersionId": 5643863}, {"Id": 186164924, "KernelVersionId": 129795755, "SourceDatasetVersionId": 5228847}]
|
[{"Id": 5643863, "DatasetId": 3240464, "DatasourceVersionId": 5719190, "CreatorUserId": 7397148, "LicenseName": "CC0: Public Domain", "CreationDate": "05/09/2023 13:43:53", "VersionNumber": 4.0, "Title": "Top 10000 popular Movies TMDB", "Slug": "top-10000-popular-movies-tmdb-05-2023", "Subtitle": "A Comprehensive Collection of Metadata for the Top 10,000 Popular Movies on TMDB", "Description": "This is a collection of metadata about the top 10,000 most popular movies on **The Movie Database (TMDB)** as of May 2023. The dataset includes information such as movie titles, release dates, runtime, genres, production companies, budget, and revenue. This data is collected from TMDB's public [API](https://developer.themoviedb.org/docs). \n\n#### Little bit about [TMDB](https://www.themoviedb.org/)\nTMDB (The Movie Database) is a popular online database and community platform that provides a vast collection of information about movies, TV shows, and other related content. TMDB allows users to browse and search for movies and TV shows, view information such as cast, crew, synopsis, and ratings, and also contribute to the community by adding their own reviews, ratings, and other content.\n\n#### Purpose\nThe dataset is intended for use by data analysts, researchers, and developers who are interested in studying or analyzing the popularity and characteristics of movies. The dataset can be used to perform a wide range of analyses, such as exploring trends in movie genres over time, identifying patterns in movie budgets and revenues, and analyzing the impact of different attributes on a movie's popularity.\n\n####Attributes\n- **id**: Unique identifier assigned to each movie in the TMDB database.\n- **title**: Title of the movie.\n- **release_date**: Date on which the movie was released.\n- **genres**: List of genres associated with the movie.\n- **original_language**: Language in which the movie was originally produced.\n- **vote_average**: Average rating given to the movie by TMDB users.\n- **vote_count**: Number of votes cast for the movie on TMDB.\n- **popularity**: Popularity score assigned to the movie by TMDB based on user engagement.\n- **overview**: Brief description or synopsis of the movie.\n- **budget**: Estimated budget for producing the movie in USD.\n- **production_companies**: List of production companies involved in making the movie.\n- **revenue**: Total revenue generated by the movie in USD.\n- **runtime**: Total runtime of the movie in minutes.\n- **tagline**: Short, memorable phrase associated with the movie, often used in promotional material.\n\n#### [Dataset Creation](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook)\nThe dataset mentioned has been created by fetching raw data from TMDB's public API, and then cleaning and preprocessing the data to improve its quality and make it easier to work with. The cleaning process has been done using a notebook available [here](https://www.kaggle.com/code/ursmaheshj/creating-dataset-using-tmdb-api/notebook), which outlines the steps taken to transform the raw data into a more usable format.", "VersionNotes": "Data Update 2023-05-09", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3240464, "CreatorUserId": 7397148, "OwnerUserId": 7397148.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5921776.0, "CurrentDatasourceVersionId": 5999208.0, "ForumId": 3305699, "Type": 2, "CreationDate": "05/08/2023 19:50:26", "LastActivityDate": "05/08/2023", "TotalViews": 7400, "TotalDownloads": 1454, "TotalVotes": 37, "TotalKernels": 10}]
|
[{"Id": 7397148, "UserName": "ursmaheshj", "DisplayName": "Mahesh Jadhav", "RegisterDate": "05/11/2021", "PerformanceTier": 1}]
|
| false | 2 | 1,867 | 2 | 2,566 | 1,867 |
||
129698027
|
<jupyter_start><jupyter_text>Pistachio Image Dataset
DATASET: https://www.muratkoklu.com/datasets/
CV:https://www.muratkoklu.com/en/publications/
Pistachio Image Dataset
Citation Request :
1. OZKAN IA., KOKLU M. and SARACOGLU R. (2021). Classification of Pistachio Species Using Improved K-NN Classifier. Progress in Nutrition, Vol. 23, N. 2, pp. DOI:10.23751/pn.v23i2.9686. (Open Access) https://www.mattioli1885journals.com/index.php/progressinnutrition/article/view/9686/9178
2. SINGH D, TASPINAR YS, KURSUN R, CINAR I, KOKLU M, OZKAN IA, LEE H-N., (2022). Classification and Analysis of Pistachio Species with Pre-Trained Deep Learning Models, Electronics, 11 (7), 981. https://doi.org/10.3390/electronics11070981. (Open Access)
Article Download (PDF):
1: https://www.mattioli1885journals.com/index.php/progressinnutrition/article/view/9686/9178
2: https://doi.org/10.3390/electronics11070981
DATASET: https://www.muratkoklu.com/datasets/
ABSTRACT: In order to keep the economic value of pistachio nuts which have an important place in the agricultural economy, the efficiency of post-harvest industrial processes is very important. To provide this efficiency, new methods and technologies are needed for the separation and classification of pistachios. Different pistachio species address different markets, which increases the need for the classification of pistachio species. In this study, it is aimed to develop a classification model different from traditional separation methods, based on image processing and artificial intelligence which are capable to provide the required classification. A computer vision system has been developed to distinguish two different species of pistachios with different characteristics that address different market types. 2148 sample image for these two kinds of pistachios were taken with a high-resolution camera. The image processing techniques, segmentation and feature extraction were applied on the obtained images of the pistachio samples. A pistachio dataset that has sixteen attributes was created. An advanced classifier based on k-NN method, which is a simple and successful classifier, and principal component analysis was designed on the obtained dataset. In this study; a multi-level system including feature extraction, dimension reduction and dimension weighting stages has been proposed. Experimental results showed that the proposed approach achieved a classification success of 94.18%. The presented high-performance classification model provides an important need for the separation of pistachio species and increases the economic value of species. In addition, the developed model is important in terms of its application to similar studies.
Keywords: Classification, Image processing, k nearest neighbor classifier, Pistachio species
morphological Features (12 Features)
1. Area
2. Perimeter
3. Major_Axis
4. Minor_Axis
5. Eccentricity
6. Eqdiasq
7. Solidity
8. Convex_Area
9. Extent
10. Aspect_Ratio
11. Roundness
12. Compactness
Shape Features (4 Features)
13. Shapefactor_1
14. Shapefactor_2
15. Shapefactor_3
16. Shapefactor_4
Color Features (12 Features)
17. Mean_RR
18. Mean_RG
19. Mean_RB
20. StdDev_RR
21. StdDev_RG
22. StdDev_RB
23. Skew_RR
24. Skew_RG
25. Skew_RB
26. Kurtosis_RR
27. Kurtosis_RG
28. Kurtosis_RB
29. Class
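As an illustrative sketch only (not the paper's exact pipeline), the improved k-NN approach described in the abstract above can be approximated with scikit-learn by standardizing the sixteen morphological/shape features, reducing dimensionality with PCA, and fitting a k-NN classifier; the CSV filename, the "Class" column name, and the chosen hyperparameters here are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

features = pd.read_csv("Pistachio_16_Features_Dataset.csv")  # hypothetical filename
X, y = features.drop(columns=["Class"]), features["Class"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
knn_pipe = make_pipeline(StandardScaler(), PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
knn_pipe.fit(X_tr, y_tr)
print("hold-out accuracy:", knn_pipe.score(X_te, y_te))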
2. SINGH D, TASPINAR YS, KURSUN R, CINAR I, KOKLU M, OZKAN IA, LEE H-N., (2022). Classification and Analysis of Pistachio Species with Pre-Trained Deep Learning Models, Electronics, 11 (7), 981. https://doi.org/10.3390/electronics11070981. (Open Access)
ABSTRACT: Pistachio is a shelled fruit from the anacardiaceae family. The homeland of pistachio is the Middle East. The Kirmizi pistachios and Siirt pistachios are the major types grown and exported in Turkey. Since the prices, tastes, and nutritional values of these types differs, the type of pistachio becomes important when it comes to trade. This study aims to identify these two types of pistachios, which are frequently grown in Turkey, by classifying them via convolutional neural networks. Within the scope of the study, images of Kirmizi and Siirt pistachio types were obtained through the computer vision system. The pre-trained dataset includes a total of 2148 images, 1232 of Kirmizi type and 916 of Siirt type. Three different convolutional neural network models were used to classify these images. Models were trained by using the transfer learning method, with AlexNet and the pre-trained models VGG16 and VGG19. The dataset is divided as 80% training and 20% test. As a result of the performed classifications, the success rates obtained from the AlexNet, VGG16, and VGG19 models are 94.42%, 98.84%, and 98.14%, respectively. Models’ performances were evaluated through sensitivity, specificity, precision, and F-1 score metrics. In addition, ROC curves and AUC values were used in the performance evaluation. The highest classification success was achieved with the VGG16 model. The obtained results reveal that these methods can be used successfully in the determination of pistachio types.
Keywords: pistachio; genetic varieties; machine learning; deep learning; food recognition
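A minimal transfer-learning sketch in Keras in the spirit of the frozen-base VGG16 approach described above; it is not the paper's exact training setup, and the 224x224 RGB input size and the train_ds / val_ds datasets are assumptions.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional base
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # Kirmizi vs Siirt
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds / val_ds are assumed to exist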
https://www.muratkoklu.com/datasets/
Kaggle dataset identifier: pistachio-image-dataset
<jupyter_script>import tensorflow.keras.models as Models
import tensorflow.keras.optimizers as Optimizer
import tensorflow.keras.metrics as Metrics
import tensorflow.keras.utils as Utils
from keras.utils.vis_utils import model_to_dot
import os
import matplotlib.pyplot as plot
import cv2
import numpy as np
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix as CM
from random import randint
from IPython.display import SVG
import matplotlib.gridspec as gridspec
import tensorflow.keras.layers as Layers
import tensorflow.keras.activations as Activations
import tensorflow.keras.models as Models
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_addons as tfa
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import os
# Create an empty dataframe
data = pd.DataFrame(columns=["image_path", "label"])
# Define the labels/classes
labels = {
"/kaggle/input/intel-image-classification/data/seg_pred": "Cloudy",
"/kaggle/input/satellite-image-classification/data/seg_test": "Desert",
"/kaggle/input/satellite-image-classification/data/seg_train": "Green_Area",
}
# Loop over the train, test, and val folders and extract the image path and label
for folder in labels:
for image_name in os.listdir(folder):
image_path = os.path.join(folder, image_name)
label = labels[folder]
data = data.append(
{"image_path": image_path, "label": label}, ignore_index=True
)
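# A minimal sketch of a ViT-style patch-extraction layer, since the notebook title mentions a
# Vision Transformer; this next step and the patch size are assumptions, not part of the original code.
class Patches(layers.Layer):
    def __init__(self, patch_size=16):
        super().__init__()
        self.patch_size = patch_size

    def call(self, images):
        batch_size = tf.shape(images)[0]
        patches = tf.image.extract_patches(
            images=images,
            sizes=[1, self.patch_size, self.patch_size, 1],
            strides=[1, self.patch_size, self.patch_size, 1],
            rates=[1, 1, 1, 1],
            padding="VALID",
        )
        # flatten the spatial grid of patches into a sequence per image
        return tf.reshape(patches, [batch_size, -1, patches.shape[-1]])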
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/698/129698027.ipynb
|
pistachio-image-dataset
|
muratkokludataset
|
[{"Id": 129698027, "ScriptId": 38561420, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 15116768, "CreationDate": "05/15/2023 20:50:10", "VersionNumber": 1.0, "Title": "Intel Image classification with Vision Transformer", "EvaluationDate": "05/15/2023", "IsChange": true, "TotalLines": 46.0, "LinesInsertedFromPrevious": 46.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186026181, "KernelVersionId": 129698027, "SourceDatasetVersionId": 3372753}, {"Id": 186026180, "KernelVersionId": 129698027, "SourceDatasetVersionId": 269359}]
|
[{"Id": 3372753, "DatasetId": 2033813, "DatasourceVersionId": 3424151, "CreatorUserId": 10072866, "LicenseName": "CC BY-NC-SA 4.0", "CreationDate": "03/28/2022 18:01:27", "VersionNumber": 1.0, "Title": "Pistachio Image Dataset", "Slug": "pistachio-image-dataset", "Subtitle": "The dataset includes a total of 2148 images, 1232 of Kirmizi and 916 of Siirt P.", "Description": "DATASET: https://www.muratkoklu.com/datasets/\nCV:https://www.muratkoklu.com/en/publications/\n\nPistachio Image Dataset\nCitation Request :\n\n1. OZKAN IA., KOKLU M. and SARACOGLU R. (2021). Classification of Pistachio Species Using Improved K-NN Classifier. Progress in Nutrition, Vol. 23, N. 2, pp. DOI:10.23751/pn.v23i2.9686. (Open Access) https://www.mattioli1885journals.com/index.php/progressinnutrition/article/view/9686/9178\n\n2. SINGH D, TASPINAR YS, KURSUN R, CINAR I, KOKLU M, OZKAN IA, LEE H-N., (2022). Classification and Analysis of Pistachio Species with Pre-Trained Deep Learning Models, Electronics, 11 (7), 981. https://doi.org/10.3390/electronics11070981. (Open Access) \n\nArticle Download (PDF):\n1: https://www.mattioli1885journals.com/index.php/progressinnutrition/article/view/9686/9178\n2: https://doi.org/10.3390/electronics11070981\n\nDATASET: https://www.muratkoklu.com/datasets/\n\nABSTRACT: In order to keep the economic value of pistachio nuts which have an important place in the agricultural economy, the efficiency of post-harvest industrial processes is very important. To provide this efficiency, new methods and technologies are needed for the separation and classification of pistachios. Different pistachio species address different markets, which increases the need for the classification of pistachio species. In this study, it is aimed to develop a classification model different from traditional separation methods, based on image processing and artificial intelligence which are capable to provide the required classification. A computer vision system has been developed to distinguish two different species of pistachios with different characteristics that address different market types. 2148 sample image for these two kinds of pistachios were taken with a high-resolution camera. The image processing techniques, segmentation and feature extraction were applied on the obtained images of the pistachio samples. A pistachio dataset that has sixteen attributes was created. An advanced classifier based on k-NN method, which is a simple and successful classifier, and principal component analysis was designed on the obtained dataset. In this study; a multi-level system including feature extraction, dimension reduction and dimension weighting stages has been proposed. Experimental results showed that the proposed approach achieved a classification success of 94.18%. The presented high-performance classification model provides an important need for the separation of pistachio species and increases the economic value of species. In addition, the developed model is important in terms of its application to similar studies. \n\nKeywords: Classification, Image processing, k nearest neighbor classifier, Pistachio species\n\n\nmorphological Features (12 Features)\n1. Area\n2. Perimeter\n3. Major_Axis\n4. Minor_Axis\n5. Eccentricity\n6. Eqdiasq\n7. Solidity\n8. Convex_Area\n9. Extent\n10. Aspect_Ratio\n11. Roundness\n12. Compactness\n\nShape Features (4 Features)\n13. Shapefactor_1\n14. Shapefactor_2\n15. Shapefactor_3\n16. Shapefactor_4\n\nColor Features (12 Features)\n17. Mean_RR\n18. Mean_RG\n19. Mean_RB\n20. 
StdDev_RR\n21. StdDev_RG\n22. StdDev_RB\n213. Skew_RR\n24. Skew_RG\n25. Skew_RB\n26. Kurtosis_RR\n27. Kurtosis_RG\n28. Kurtosis_RB\n\n29. Class\n\n\n\n\n2. SINGH D, TASPINAR YS, KURSUN R, CINAR I, KOKLU M, OZKAN IA, LEE H-N., (2022). Classification and Analysis of Pistachio Species with Pre-Trained Deep Learning Models, Electronics, 11 (7), 981. https://doi.org/10.3390/electronics11070981. (Open Access) \n\nABSTRACT: Pistachio is a shelled fruit from the anacardiaceae family. The homeland of pistachio is the Middle East. The Kirmizi pistachios and Siirt pistachios are the major types grown and exported in Turkey. Since the prices, tastes, and nutritional values of these types differs, the type of pistachio becomes important when it comes to trade. This study aims to identify these two types of pistachios, which are frequently grown in Turkey, by classifying them via convolutional neural networks. Within the scope of the study, images of Kirmizi and Siirt pistachio types were obtained through the computer vision system. The pre-trained dataset includes a total of 2148 images, 1232 of Kirmizi type and 916 of Siirt type. Three different convolutional neural network models were used to classify these images. Models were trained by using the transfer learning method, with AlexNet and the pre-trained models VGG16 and VGG19. The dataset is divided as 80% training and 20% test. As a result of the performed classifications, the success rates obtained from the AlexNet, VGG16, and VGG19 models are 94.42%, 98.84%, and 98.14%, respectively. Models\u2019 performances were evaluated through sensitivity, specificity, precision, and F-1 score metrics. In addition, ROC curves and AUC values were used in the performance evaluation. The highest classification success was achieved with the VGG16 model. The obtained results reveal that these methods can be used successfully in the determination of pistachio types.\n\nKeywords: pistachio; genetic varieties; machine learning; deep learning; food recognition\n\nhttps://www.muratkoklu.com/datasets/", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 2033813, "CreatorUserId": 10072866, "OwnerUserId": 10072866.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3372753.0, "CurrentDatasourceVersionId": 3424151.0, "ForumId": 2058686, "Type": 2, "CreationDate": "03/28/2022 18:01:27", "LastActivityDate": "03/28/2022", "TotalViews": 40440, "TotalDownloads": 3439, "TotalVotes": 1563, "TotalKernels": 33}]
|
[{"Id": 10072866, "UserName": "muratkokludataset", "DisplayName": "Murat KOKLU", "RegisterDate": "03/28/2022", "PerformanceTier": 2}]
|
import tensorflow.keras.models as Models
import tensorflow.keras.optimizers as Optimizer
import tensorflow.keras.metrics as Metrics
import tensorflow.keras.utils as Utils
from keras.utils.vis_utils import model_to_dot
import os
import matplotlib.pyplot as plot
import cv2
import numpy as np
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix as CM
from random import randint
from IPython.display import SVG
import matplotlib.gridspec as gridspec
import tensorflow.keras.layers as Layers
import tensorflow.keras.activations as Activations
import tensorflow.keras.models as Models
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_addons as tfa
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import os
# Create an empty dataframe
data = pd.DataFrame(columns=["image_path", "label"])
# Define the labels/classes
labels = {
"/kaggle/input/intel-image-classification/data/seg_pred": "Cloudy",
"/kaggle/input/satellite-image-classification/data/seg_test": "Desert",
"/kaggle/input/satellite-image-classification/data/seg_train": "Green_Area",
}
# Loop over the train, test, and val folders and extract the image path and label
for folder in labels:
for image_name in os.listdir(folder):
image_path = os.path.join(folder, image_name)
label = labels[folder]
data = data.append(
{"image_path": image_path, "label": label}, ignore_index=True
)
| false | 0 | 391 | 0 | 1,927 | 391 |
||
129698673
|
<jupyter_start><jupyter_text>E Commerce Analytics
Kaggle dataset identifier: e-commerce-analytics
<jupyter_script># Importing the libaraies necessary
import numpy as np
import pandas as pd
import seaborn as sns
import datetime as dt
from datetime import timedelta
from matplotlib import pyplot as plt
import plotly.offline as pyoff
import plotly.graph_objs as go
import plotly.express as px
import scipy.stats as stats
import os
import re
import warnings
warnings.filterwarnings("ignore")
# Checking the current directory
# importing the necessary datasets
cust_info = pd.read_csv(r"/kaggle/input/e-commerce-analytics/CUSTOMERS.csv")
seller_info = pd.read_csv(r"/kaggle/input/e-commerce-analytics/SELLERS.csv")
product_info = pd.read_csv(r"/kaggle/input/e-commerce-analytics/PRODUCTS.csv")
order_info = pd.read_csv(r"/kaggle/input/e-commerce-analytics/ORDERS.csv")
order_item = pd.read_csv(r"/kaggle/input/e-commerce-analytics/ORDER_ITEMS.csv")
payment_info = pd.read_csv(r"/kaggle/input/e-commerce-analytics/ORDER_PAYMENTS.csv")
ratings = pd.read_csv(r"/kaggle/input/e-commerce-analytics/ORDER_REVIEW_RATINGS.csv")
location = pd.read_csv(r"/kaggle/input/e-commerce-analytics/GEO_LOCATION.csv")
# ### Performing EDA on the Datasets
# #### 1. cust_info
cust_info.info(), cust_info.isnull().sum()
# #### 2. Seller_info
seller_info.info(), seller_info.isnull().sum()
## Replacing the null values with the Mode. (Cannot drop the null values as these seller_ids have orders in order_info)
for col in seller_info.columns:
if seller_info[col].dtype == "object":
seller_info[col].fillna(seller_info[col].mode()[0], inplace=True)
# #### 3. Product_info
product_info.info(), product_info.isnull().sum()
# Replacing the null values with the mode for object columns and the mean for numeric columns (to avoid data loss)
for col in product_info.columns:
if product_info[col].dtype == "object":
product_info[col].fillna(product_info[col].mode()[0], inplace=True)
else:
product_info[col].fillna(product_info[col].mean(), inplace=True)
# #### 4. Order_info
order_info.info(), order_info.isnull().sum()
## Converting the date columns to datetime format
order_info["order_approved_at"] = pd.to_datetime(order_info["order_approved_at"])
order_info["order_purchase_timestamp"] = pd.to_datetime(
order_info["order_purchase_timestamp"]
)
order_info["order_delivered_carrier_date"] = pd.to_datetime(
order_info["order_delivered_carrier_date"]
)
order_info["order_delivered_customer_date"] = pd.to_datetime(
order_info["order_delivered_customer_date"]
)
order_info["order_estimated_delivery_date"] = pd.to_datetime(
order_info["order_estimated_delivery_date"]
)
# ### 5. order_item
order_item.info(), order_item.isnull().sum()
## Converting the date column to datetime format
order_item["shipping_limit_date"] = pd.to_datetime(order_item["shipping_limit_date"])
# ### 6. Payment_info
payment_info.info(), payment_info.isnull().sum()
# ### 7. Ratings
ratings.info(), ratings.isnull().sum()
## Converting the date column to datetime format
ratings["review_creation_date"] = pd.to_datetime(ratings["review_creation_date"])
ratings["review_answer_timestamp"] = pd.to_datetime(ratings["review_answer_timestamp"])
# ### 8. locations
location.info(), location.isnull().sum()
# #### Define & calculate high level metrics like (Total Revenue, Total quantity, Total products, Total categories, Total sellers, Total locations, Total channels, Total payment methods etc…)
## The details below cover the high-level metrics asked for above
print("The total number of customers is", cust_info["customer_unique_id"].nunique())
print("The total number of sellers is", seller_info["seller_id"].nunique())
print("The total number of products is", product_info["product_id"].nunique())
print(
    "The total number of product categories is",
    product_info["product_category_name"].nunique(),
)
print("The total number of cities is", cust_info["customer_city"].nunique())
print("The total number of states is", cust_info["customer_state"].nunique())
print("Total revenue generated is", payment_info["payment_value"].sum())
print("Total number of payment channels is", payment_info["payment_type"].nunique())
print("The total quantity ordered is", len(order_item["order_id"]))
# #### Understanding how many new customers were acquired every month
# Merging the customer and order table under "Merge_data"
Merge_data = pd.merge(left=order_info, right=cust_info, how="inner", on="customer_id")
# dropping the duplicates
Merge_data.drop_duplicates(subset="customer_unique_id", keep="first", inplace=True)
# Grouping the data by year & month, then counting customers acquired every month under "New_cust"
New_cust = Merge_data.groupby(
by=[
Merge_data["order_purchase_timestamp"].dt.year,
Merge_data["order_purchase_timestamp"].dt.month,
]
)["customer_unique_id"].count()
# Setting the background colour
ax = plt.axes()
ax.set_facecolor("papayawhip")
# Plotting the bar graph
New_cust.plot(kind="bar", figsize=(18, 8))
plt.xlabel("Month & Year")
plt.ylabel("No. of cust")
plt.title("No. of New cust accuried every month")
plt.show()
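# Variant sketch (assumption: "new" should mean a customer's earliest order): sorting by purchase
# time before dropping duplicates guarantees that the first-ever order is the row that is kept
first_orders = (
    pd.merge(left=order_info, right=cust_info, how="inner", on="customer_id")
    .sort_values("order_purchase_timestamp")
    .drop_duplicates(subset="customer_unique_id", keep="first")
)
new_by_month = first_orders.groupby(
    first_orders["order_purchase_timestamp"].dt.to_period("M")
)["customer_unique_id"].count()
new_by_month.head()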
# #### Understand the retention of customers on a month-on-month basis
# Merging the customer and order table under "Merger"
Merger = pd.merge(left=order_info, right=cust_info, how="inner", on="customer_id")
# Grouping the data by year & month, then counting all customers every month under "cust"
cust = Merger.groupby(
by=[
Merger["order_purchase_timestamp"].dt.year,
Merger["order_purchase_timestamp"].dt.month,
]
)["customer_unique_id"].count()
# Changing them into dataframes
New_cust = pd.DataFrame(New_cust)
cust = pd.DataFrame(cust)
# Joining both the datasets by concat under "final"
final = pd.concat([New_cust, cust], axis=1)
# setting the column names to "A" & "B"
final = final.set_axis(["A", "B"], axis="columns")
# Getting the number of customer retained every month
final["Retained_cust"] = final["B"] - final["A"]
# Setting the background colour
ax = plt.axes()
ax.set_facecolor("papayawhip")
# Plotting the bar graph
final["Retained_cust"].plot(kind="bar", figsize=(18, 8))
plt.xlabel("Year & Month")
plt.ylabel("No. of cust")
plt.title("No. of cust retained every month")
plt.show()
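# Alternative sketch of retention (assumed definition): customers who placed orders in both the
# previous and the current month, counted from the merged "Merger" table
activity = Merger.assign(period=Merger["order_purchase_timestamp"].dt.to_period("M"))
monthly_customers = activity.groupby("period")["customer_unique_id"].apply(set)
retained_counts = pd.Series(
    {
        str(cur): len(monthly_customers[cur] & monthly_customers[prev])
        for prev, cur in zip(monthly_customers.index[:-1], monthly_customers.index[1:])
    }
)
retained_counts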
# #### How do revenues from existing/new customers trend on a month-on-month basis
# Merging the datasets under "xyz"
xyz = pd.merge(left=payment_info, right=order_info, how="left", on="order_id")
# Dropping the columns that are not necessary
xyz.drop(
columns=[
"payment_sequential",
"payment_type",
"payment_installments",
"order_status",
"order_approved_at",
"order_delivered_carrier_date",
"order_delivered_customer_date",
"order_estimated_delivery_date",
],
inplace=True,
)
# Creating a function that will tag the customer as "New" or "Existing"
def tag_customers(df):
# Sort the dataframe by transaction date
df.sort_values("order_purchase_timestamp", inplace=True)
# Create an empty list to store the customer types
customer_types = []
# Initialize a dictionary to store the last transaction date for each customer
last_transaction_dates = {}
# Loop through each row of the dataframe
for index, row in df.iterrows():
# Get the customer id and transaction date for the current row
customer_id = row["customer_id"]
transaction_date = row["order_purchase_timestamp"]
# Check if the customer has made a transaction before
if customer_id in last_transaction_dates:
# Get the last transaction date for the customer
last_transaction_date = last_transaction_dates[customer_id]
# Check if the last transaction was in the same month as the current transaction
if (
last_transaction_date.year == transaction_date.year
and last_transaction_date.month == transaction_date.month
):
# The customer is an existing customer for this month
customer_type = "existing"
else:
# The customer is a new customer for this month
customer_type = "new"
else:
# The customer is a new customer for the first transaction
customer_type = "new"
# Add the customer type to the list
customer_types.append(customer_type)
# Update the last transaction date for the customer
last_transaction_dates[customer_id] = transaction_date
# Add the customer types as a new column to the dataframe
df["customer_type"] = customer_types
return df
# applying the function on the desired dataset
tag_customers(xyz)
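# Vectorized sketch of the same tagging rule (assumed equivalent, faster than row iteration):
# a row is "existing" only when that customer's immediately previous order fell in the same month
xyz_vec = xyz.sort_values("order_purchase_timestamp").copy()
period = xyz_vec["order_purchase_timestamp"].dt.to_period("M")
prev_period = period.groupby(xyz_vec["customer_id"]).shift(1)
xyz_vec["customer_type_vec"] = np.where(period.eq(prev_period), "existing", "new")
(xyz_vec["customer_type_vec"] == xyz_vec["customer_type"]).mean()  # sanity check, should be ~1.0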
# Grouping the values as needed & saving them under "abc"
abc = xyz.groupby(
by=[
xyz["order_purchase_timestamp"].dt.year,
xyz["order_purchase_timestamp"].dt.month,
"customer_type",
]
)["payment_value"].sum()
# converting it to a dataframe under "pqr"
pqr = pd.DataFrame(abc)
# Plotting the bar graph
fig = pqr.plot(kind="bar", figsize=(18, 8), width=0.95)
y = ["{:,.0f}".format(x) for x in fig.get_yticks()]
fig.set_yticklabels(y)
plt.xlabel("Year & Month")
plt.ylabel("Amount Spent")
plt.title("Amount Spent by Existing & New Customer")
plt.show()
# Method 2 By using pivot table
d = xyz.pivot_table(
columns="customer_type",
index=[
xyz["order_purchase_timestamp"].dt.year,
xyz["order_purchase_timestamp"].dt.month,
],
values="payment_value",
aggfunc="sum",
)
# Plotting using Line chart
fig = d.plot(kind="line", figsize=(20, 8))
y = ["{:,.0f}".format(x) for x in fig.get_yticks()]
fig.set_yticklabels(y)
plt.xlabel("Year & Month")
plt.ylabel("Amount Spent")
plt.title("Amount Spent by Existing & New Customer")
plt.show()
# #### Understand the trends/seasonality of sales, quantity by category, location, month, week, day, time, channel, payment method etc…
# ##### Revenue by Month
order_item["Total_price"] = order_item.price + order_item.freight_value
x = order_item.pivot_table(
index=order_item["shipping_limit_date"].dt.month,
values="Total_price",
aggfunc="sum",
).reset_index()
# Plotting the bar graph
fig = x.plot(kind="bar", x="shipping_limit_date", figsize=(18, 8), width=0.95)
y = ["{:,.0f}".format(x) for x in fig.get_yticks()]
fig.set_yticklabels(y)
plt.xlabel("Month")
plt.ylabel("Total Revenue")
plt.title("Revenue by Month ")
plt.show()
# ##### Quantity by category
# Merging the datasets & getting the desired information
qty_data = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
grouped_data = (
qty_data.groupby(by="product_category_name")["order_id"].count().reset_index()
)
# Plotting the bar graph
plot = px.bar(
grouped_data,
x="order_id",
y="product_category_name",
title="Total Quantity sold by Category",
width=1000,
height=1200,
orientation="h",
)
plot.update_layout(xaxis_title="No. of Quantity", yaxis_title="product_category_name")
plot.show("svg")
# ##### Quantity by location
# ##### By City
# Merging the needed dataset
xyz = pd.merge(left=order_info, right=cust_info, on="customer_id", how="left")
# Grouping up the data by City & cust_id for "Top 100 cities by order placed"
uvw = (
xyz.groupby(by=["customer_state", "customer_city"])["order_id"]
.count()
.reset_index()
.nlargest(100, columns="order_id")
)
# plotting the above analysis in a bar graph
plot = px.bar(
    uvw,
    x="order_id",
    y="customer_city",
    title="Top 100 Cities by Count of Orders Placed",
width=1000,
height=1200,
orientation="h",
)
plot.update_layout(xaxis_title="Count of Order Placed", yaxis_title="City")
plot.show("svg")
# ##### By State
# Grouping up the data by City & cust_id for "order placed by state"
pqr = xyz.groupby(by="customer_state")["order_id"].count().reset_index()
# plotting the above analysis in a bar graph
plot = px.bar(
pqr,
x="customer_state",
y="order_id",
title="Count of Order Placed by State",
color="customer_state",
width=1000,
height=700,
)
plot.update_layout(xaxis_title="State", yaxis_title="Count of Order Placed")
plot.show("svg")
# #### Revenue Generated by Days
# Grouping up the data by Day & Total Price for "Revenue Generated Day wise"
by_day = order_item.pivot_table(
index=order_item["shipping_limit_date"].dt.day, values="Total_price", aggfunc="sum"
).reset_index()
# plotting the above analysis in a bar graph
plot = px.bar(
    by_day,
    x="shipping_limit_date",
    y="Total_price",
    title="Revenue Generated by Days",
color="shipping_limit_date",
width=1200,
height=600,
)
plot.update_layout(xaxis_title="Days", yaxis_title="Revenue")
plot.show("svg")
# #### Revenue Generated by Week
# Grouping up the data by Day & Total Price for "Revenue Generated Week wise"
by_week = order_item.pivot_table(
index=[
order_item["shipping_limit_date"].dt.year,
order_item["shipping_limit_date"].dt.weekday,
],
values="Total_price",
aggfunc="sum",
)
# plotting the above analysis in a bar graph
fig = by_week.plot(kind="bar", figsize=(20, 12), width=0.90)
y = ["{:,.0f}".format(x) for x in fig.get_yticks()]
fig.set_yticklabels(y)
plt.xlabel("Weeks by Year")
plt.ylabel("Total Revenue")
plt.title("Revenue by Weeks ")
plt.show()
# #### Revenue Generated by Hour of Day
# Grouping up the data by Day & Total Price for "Revenue Generated By Hour of Day"
by_Hour = order_item.pivot_table(
index=order_item["shipping_limit_date"].dt.hour, values="Total_price", aggfunc="sum"
).reset_index()
# plotting the above analysis in a bar graph
plot = px.bar(
by_Hour,
x="shipping_limit_date",
y="Total_price",
title="Revenue Generated on Hourly Basis",
color="shipping_limit_date",
width=1200,
height=600,
)
plot.update_layout(xaxis_title="Hour of Day", yaxis_title="Revenue")
plot.show("svg")
# #### By Channel and Payment Method
# Grouping up the data by payment channel & Total revenue for "Total Revenue by Payment Channel"
by_Channel = payment_info.pivot_table(
index=payment_info["payment_type"], values="payment_value", aggfunc="sum"
).reset_index()
# plotting the above analysis in a bar graph & pie chart
plot = px.bar(
by_Channel,
x="payment_type",
y="payment_value",
title="Total Revenue by Payment Channel",
width=1000,
height=500,
)
plot.update_layout(xaxis_title="Payment Channel", yaxis_title="Revenue")
plot.show("svg")
fig = px.pie(
by_Channel,
names="payment_type",
values="payment_value",
title="Total Revenue by Payment Channel",
)
fig.show("svg")
# #### Popular Product by Month
# Merging the required Dataset
qty_data = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
# Getting and sorting the data needed
pqr = (
qty_data.groupby(by=[qty_data["shipping_limit_date"].dt.month, "product_id"])[
"order_id"
]
.count()
.reset_index()
)
abc = (
pqr.groupby("shipping_limit_date")
.apply(lambda x: x.nlargest(3, "order_id"))
.reset_index(drop=True)
)
# plotting the above analysis in a stacked bar graph
plot = px.bar(
abc,
x="shipping_limit_date",
y="order_id",
color="product_id",
title="Top 3 product by Each Month",
width=1400,
height=1000,
)
plot.update_layout(xaxis_title="Month", yaxis_title="Total_count")
plot.show("svg")
# #### Popular Product by seller
# Getting and sorting the data needed
by_seller = (
order_item.groupby(by=["seller_id", "product_id"])["order_id"].count().reset_index()
)
by_seller
# #### Popular Product by Month
prod_Month = (
order_item.groupby(by=[order_item["shipping_limit_date"].dt.month, "product_id"])[
"order_id"
]
.count()
.reset_index()
)
pop_prod_Month = (
prod_Month.groupby("shipping_limit_date")
.apply(lambda x: x.nlargest(3, "order_id"))
.reset_index(drop=True)
)
pop_prod_Month
# #### Popular Product by State
# Getting and sorting the data needed
prod_state = pd.merge(left=order_item, right=seller_info, on="seller_id", how="left")
pop_prod = (
prod_state.groupby(by=["seller_state", "product_id"])["order_id"]
.count()
.reset_index()
)
pop_prod_state = (
pop_prod.groupby("seller_state")
.apply(lambda x: x.nlargest(3, "order_id"))
.reset_index(drop=True)
)
# plotting the above analysis in a bar graph
plot = px.bar(
pop_prod_state,
x="order_id",
y="product_id",
color="seller_state",
title="Top 3 popular product by State",
width=1200,
height=1000,
orientation="h",
)
plot.update_layout(xaxis_title="Count", yaxis_title="Product", paper_bgcolor="cornsilk")
plot.show("svg")
# #### Popular Product by Product category
# Getting and sorting the data needed
prod_cat = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
xyz = (
prod_cat.groupby(by=["product_category_name", "product_id"])["order_id"]
.count()
.reset_index()
)
pop_prod_cat = (
xyz.groupby("product_category_name")
.apply(lambda x: x.nlargest(3, "order_id"))
.reset_index(drop=True)
)
# plotting the above analysis in a bar graph
fig = px.bar(
pop_prod_cat,
x="order_id",
y="product_id",
color="product_category_name",
title="Top 3 popular product by Category",
width=1000,
height=1600,
orientation="h",
)
fig.update_layout(xaxis_title="Count", yaxis_title="Product")
fig.update_layout(legend=dict(bgcolor="lightblue"), paper_bgcolor="seashell")
fig.show("svg")
# #### Popular Categories by state
# Getting and sorting the data needed
a = pd.merge(left=prod_state, right=product_info, on="product_id", how="left")
a.drop(
columns=[
"product_photos_qty",
"product_weight_g",
"price",
"freight_value",
"Total_price",
"seller_zip_code_prefix",
"product_name_lenght",
"product_description_lenght",
"product_length_cm",
"product_height_cm",
"product_width_cm",
],
inplace=True,
)
group_data = (
a.groupby(by=["seller_state", "product_category_name"])["order_id"]
.count()
.reset_index()
)
pop_category = (
group_data.groupby("seller_state")
.apply(lambda x: x.nlargest(1, "order_id"))
.reset_index(drop=True)
)
# plotting the above analysis in a bar graph
plot = px.bar(
pop_category,
x="seller_state",
y="order_id",
color="product_category_name",
title="Most popular Category by State",
width=1000,
height=800,
)
plot.update_layout(xaxis_title="State", yaxis_title="Count", paper_bgcolor="cornsilk")
plot.show("svg")
# #### Popular Categories by Month
# Getting and sorting the data needed
by_month = (
a.groupby(by=[a["shipping_limit_date"].dt.month, "product_category_name"])[
"order_id"
]
.count()
.reset_index()
)
Monthly = (
by_month.groupby("shipping_limit_date")
.apply(lambda x: x.nlargest(2, "order_id"))
.reset_index(drop=True)
)
# plotting the above analysis in a bar graph
plot = px.bar(
Monthly,
x="order_id",
y="shipping_limit_date",
color="product_category_name",
title="Top 2 popular category by Month",
width=1200,
height=800,
orientation="h",
)
plot.update_layout(xaxis_title="Count", yaxis_title="Month", paper_bgcolor="cornsilk")
plot.show("svg")
# #### List top 10 most expensive products sorted by price
# Getting and sorting the data needed
For_Price = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
For_Price.drop(
columns=[
"freight_value",
"price",
"order_id",
"seller_id",
"shipping_limit_date",
"order_item_id",
"product_name_lenght",
"product_description_lenght",
"product_photos_qty",
"product_weight_g",
"product_length_cm",
"product_height_cm",
"product_width_cm",
],
inplace=True,
)
For_Price.groupby(by=["product_id", "product_category_name"])["Total_price"]
For_Price.nlargest(10, columns="Total_price").reset_index()
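# Sketch (assumption: one row per product is wanted): the nlargest above ranks order lines, so a
# product can repeat; aggregating to each product's highest line total removes those repeats
top10_products = (
    For_Price.groupby(["product_id", "product_category_name"], as_index=False)["Total_price"]
    .max()
    .nlargest(10, "Total_price")
    .reset_index(drop=True)
)
top10_products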
# #### Divide the customers into groups based on the revenue generated
# Getting and sorting the data needed
segment = pd.merge(left=order_item, right=order_info, on="order_id", how="left")
segment.drop(
columns=[
"order_item_id",
"seller_id",
"order_status",
"order_purchase_timestamp",
"order_approved_at",
"order_delivered_carrier_date",
"order_delivered_customer_date",
"order_estimated_delivery_date",
],
inplace=True,
)
seg_cust = segment.groupby(by="customer_id")["Total_price"].sum().reset_index()
bins = [0, 2000, 4000, 6000, 8000, 10000, 12000, 14000]
labels = [
"0 - 2000",
"2000 - 4000",
"4000 - 6000",
"6000 - 8000",
"8000 - 10000",
"10000 - 12000",
"12000 - 14000",
]
seg_cust["customer_seg"] = pd.cut(
seg_cust.Total_price, bins, labels=labels, include_lowest=True
)
plot_data = seg_cust.groupby(by="customer_seg")["customer_id"].count().reset_index()
plot_data
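# Illustrative chart for the customer segments, in the same style as the other segment charts in
# this notebook (the title and colour choices here are assumptions)
plot = px.bar(
    plot_data,
    x="customer_seg",
    y="customer_id",
    color="customer_seg",
    title="Customer count by Revenue Segment",
    width=800,
    height=600,
)
plot.update_layout(
    xaxis_title="Revenue Segment", yaxis_title="Customer Count", paper_bgcolor="cornsilk"
)
plot.show("svg")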
# #### Divide the Sellers into groups based on the revenue generated
# Getting and sorting the data needed
segment = pd.merge(left=order_item, right=order_info, on="order_id", how="left")
segment.drop(
columns=[
"order_item_id",
"customer_id",
"order_status",
"order_purchase_timestamp",
"order_approved_at",
"order_delivered_carrier_date",
"order_delivered_customer_date",
"order_estimated_delivery_date",
],
inplace=True,
)
seg_seller = segment.groupby(by="seller_id")["Total_price"].sum().reset_index()
bins = [0, 50000, 100000, 150000, 200000, 250000]
labels = ["Poor", "Fair", "Good", "Best", "Excellent"]
seg_seller["Seller_seg"] = pd.cut(
seg_seller.Total_price, bins, labels=labels, include_lowest=True
)
plot_data = seg_seller.groupby(by="Seller_seg")["seller_id"].count().reset_index()
plot_data
# plotting the above analysis in a bar graph
plot = px.bar(
plot_data,
x="seller_id",
y="Seller_seg",
color="Seller_seg",
title="Seller count by Seller Segment",
width=800,
height=600,
orientation="h",
)
plot.update_layout(
xaxis_title="seller Count", yaxis_title="Seller Segment", paper_bgcolor="cornsilk"
)
plot.show("svg")
# #### Cross-Selling (Which products are selling together)
# ##### Hint: We need to find which of the top 10 combinations of products are selling together in each transaction. (combination of 2 or 3 buying together)
# Getting and sorting the data needed
D = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
E = D[D["order_id"].duplicated(keep=False)]
E["Cat_Group"] = E.groupby(by="order_id")["product_category_name"].transform(
lambda x: " & ".join(map(str, x))
)
E = E[["order_id", "Cat_Group"]].drop_duplicates()
F = (
E.groupby(by="Cat_Group")
.count()
.reset_index()
.sort_values("order_id", ascending=False)
.head(10)
)
# plotting the above analysis in a bar graph
plot = px.bar(
F,
x="Cat_Group",
y="order_id",
color="Cat_Group",
title="Top 10 Product Category Sold Together",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="Category Group", yaxis_title="Count", paper_bgcolor="cornsilk"
)
plot.show("svg")
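# Complementary sketch for the hint above (assumption: pairs of product IDs rather than category
# names): count how often two distinct products appear in the same order and keep the ten most
# common pairs
from itertools import combinations
from collections import Counter

pair_counts = Counter()
for _, prods in order_item.groupby("order_id")["product_id"]:
    unique_prods = sorted(set(prods))
    if len(unique_prods) > 1:
        pair_counts.update(combinations(unique_prods, 2))
top_pairs = pd.DataFrame(pair_counts.most_common(10), columns=["product_pair", "orders_together"])
top_pairs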
# #### How are customers paying?
# Getting and sorting the data needed
H = payment_info.groupby(by="payment_installments")["order_id"].count().reset_index()
# plotting the above analysis in a bar graph
plot = px.bar(
H,
x="payment_installments",
y="order_id",
color="payment_installments",
title="payment pattern of customers",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="No. of Installments",
yaxis_title="Count of cust",
paper_bgcolor="cornsilk",
)
plot.show("svg")
# #### Which payment channels are used by most customers?
# Getting and sorting the data needed
H = payment_info.groupby(by="payment_type")["order_id"].count().reset_index()
# plotting the above analysis in a pie chart
fig = px.pie(
    H,
    names="payment_type",
    values="order_id",
    title="Share of Orders by Payment Channel",
color_discrete_sequence=px.colors.sequential.Pinkyl_r,
)
fig.show("svg")
# #### Which categories (top 10) are maximum rated & minimum rated?
# Getting and sorting the data needed
G = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
J = pd.merge(left=G, right=ratings, on="order_id", how="left")
J.drop(
columns=[
"order_item_id",
"seller_id",
"price",
"freight_value",
"Total_price",
"product_name_lenght",
"product_description_lenght",
"product_photos_qty",
"product_weight_g",
"product_length_cm",
"product_height_cm",
"product_width_cm",
"review_answer_timestamp",
"review_creation_date",
],
inplace=True,
)
J = J.drop_duplicates(subset="review_id", keep="first")
J = J.dropna(axis=0)
# ##### Top 10 by Max ratings
# Getting and sorting the data needed
prod_Max_Rating = J.loc[J["review_score"] == 5]
Top_10 = (
prod_Max_Rating.groupby(by=["product_category_name"])["review_score"]
.count()
.reset_index()
.sort_values("review_score", ascending=False)
.head(10)
)
# plotting the above analysis in a bar graph
plot = px.bar(
Top_10,
x="product_category_name",
y="review_score",
color="product_category_name",
title="Top 10 category by Max Rating",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="Categories",
yaxis_title='Count of Max Rating "5" ',
paper_bgcolor="cornsilk",
)
plot.show("svg")
# ##### Top 10 by Min ratings
# Getting and sorting the data needed
prod_Min_Rating = J.loc[J["review_score"] == 1]
Bottom_10 = (
prod_Min_Rating.groupby(by=["product_category_name"])["review_score"]
.count()
.reset_index()
.sort_values("review_score", ascending=False)
.head(10)
)
# plotting the above analysis in a bar graph
plot = px.bar(
Bottom_10,
x="product_category_name",
y="review_score",
color="product_category_name",
title="Top 10 category by Min Rating",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="Categories",
yaxis_title='Count of Min Rating "1" ',
paper_bgcolor="cornsilk",
)
plot.show("svg")
# #### Which products (top10) are maximum rated & minimum rated?
# ##### Top 10 by Max Ratings
# Getting and sorting the data needed
Max_prod_Rating = J.loc[J["review_score"] == 5]
Top_10 = (
Max_prod_Rating.groupby(by=["product_id"])["review_score"]
.count()
.reset_index()
.sort_values("review_score", ascending=False)
.head(10)
)
# plotting the above analysis in a bar graph
plot = px.bar(
Top_10,
x="product_id",
y="review_score",
color="product_id",
title="Top 10 Product by Max Rating",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="product_id",
yaxis_title='Count of Max Rating "5" ',
paper_bgcolor="cornsilk",
)
plot.show("svg")
# ##### Top 10 by Min Ratings
# Getting and sorting the data needed
Min_prod_Rating = J.loc[J["review_score"] == 1]
Lowest_10 = (
Min_prod_Rating.groupby(by=["product_id"])["review_score"]
.count()
.reset_index()
.sort_values("review_score", ascending=False)
.head(10)
)
# plotting the above analysis in a bar graph
plot = px.bar(
Lowest_10,
x="product_id",
y="review_score",
color="product_id",
title="Top 10 Product by Min Rating",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="product_id",
yaxis_title='Count of Min Rating "1" ',
paper_bgcolor="cornsilk",
)
plot.show("svg")
# #### Average rating by location, seller, product, category, month etc.
# ##### Average Ratings by Location
# ###### By City
# Getting and sorting the data needed
L = pd.merge(left=order_info, right=cust_info, on="customer_id", how="left")
N = pd.merge(left=L, right=ratings, on="order_id", how="left")
N.drop(
columns=[
"customer_id",
"order_status",
"order_purchase_timestamp",
"order_approved_at",
"order_delivered_carrier_date",
"order_delivered_customer_date",
"order_estimated_delivery_date",
"customer_unique_id",
"customer_zip_code_prefix",
"review_answer_timestamp",
],
inplace=True,
)
# Getting and sorting the data needed
P = (
N.groupby(by="customer_city")["review_score"]
.mean()
.reset_index()
.sort_values(by="customer_city")
)
P.rename(columns={"customer_city": "City", "review_score": "Avg_review"}, inplace=True)
P
# ###### By State
# Getting and sorting the data needed
Q = (
N.groupby(by="customer_state")["review_score"]
.mean()
.reset_index()
.sort_values(by="customer_state")
)
Q.rename(
columns={"customer_state": "State", "review_score": "Avg_review"}, inplace=True
)
# plotting the above analysis in a bar graph
plot = px.bar(
Q,
x="Avg_review",
y="State",
color="State",
title="Average Rating by State",
width=1000,
height=700,
orientation="h",
)
plot.update_layout(
xaxis_title="Average Rating", yaxis_title="State", paper_bgcolor="cornsilk"
)
plot.show("svg")
# ##### Average Ratings by Seller
# Getting and sorting the data needed
O = pd.merge(left=order_item, right=ratings, on="order_id", how="left")
W = O.groupby(by="seller_id")["review_score"].mean().reset_index()
W.rename(
columns={"customer_state": "State", "review_score": "Avg_review"}, inplace=True
)
W
# ##### Average Ratings by Product
# Getting and sorting the data needed
S = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
T = pd.merge(left=S, right=ratings, on="order_id", how="left")
T.drop(
columns=[
"order_id",
"order_item_id",
"seller_id",
"price",
"freight_value",
"Total_price",
"shipping_limit_date",
"product_name_lenght",
"product_description_lenght",
"product_photos_qty",
"product_weight_g",
"product_length_cm",
"product_height_cm",
"product_width_cm",
"review_creation_date",
],
inplace=True,
)
U = T.groupby(by="product_id")["review_score"].mean().reset_index()
U.rename(columns={"product_id": "Product", "review_score": "Avg_review"}, inplace=True)
U
# ##### Average Ratings by Category
# Getting and sorting the data needed
V = T.groupby(by="product_category_name")["review_score"].mean().reset_index()
V.rename(
columns={"product_category_name": "Category", "review_score": "Avg_review"},
inplace=True,
)
# plotting the above analysis in a bar graph
plot = px.bar(
V,
x="Avg_review",
y="Category",
color="Category",
title="Average rating by category",
width=1000,
height=1200,
)
plot.update_layout(
xaxis_title="Avg_review", yaxis_title="Category ", paper_bgcolor="cornsilk"
)
plot.show("svg")
# ##### Average Ratings by Month
# Getting and sorting the data needed
Z = (
T.groupby(by=T["review_answer_timestamp"].dt.month)["review_score"]
.mean()
.reset_index()
)
Z.rename(
columns={"review_answer_timestamp": "Month", "review_score": "Avg_review"},
inplace=True,
)
# plotting the above analysis in a bar graph
plot = px.bar(
Z,
x="Month",
y="Avg_review",
color="Month",
title="Average rating by Month",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="Month", yaxis_title="Avg_review ", paper_bgcolor="cornsilk"
)
plot.show("svg")
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/698/129698673.ipynb
|
e-commerce-analytics
|
singhpriyanshu29
|
[{"Id": 129698673, "ScriptId": 38569724, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10196324, "CreationDate": "05/15/2023 20:59:00", "VersionNumber": 2.0, "Title": "E_Commerce_Analyticss", "EvaluationDate": "05/15/2023", "IsChange": false, "TotalLines": 941.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 941.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186027117, "KernelVersionId": 129698673, "SourceDatasetVersionId": 5693604}]
|
[{"Id": 5693604, "DatasetId": 3273704, "DatasourceVersionId": 5769225, "CreatorUserId": 10196324, "LicenseName": "Unknown", "CreationDate": "05/15/2023 20:49:41", "VersionNumber": 1.0, "Title": "E Commerce Analytics", "Slug": "e-commerce-analytics", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3273704, "CreatorUserId": 10196324, "OwnerUserId": 10196324.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5693604.0, "CurrentDatasourceVersionId": 5769225.0, "ForumId": 3339359, "Type": 2, "CreationDate": "05/15/2023 20:49:41", "LastActivityDate": "05/15/2023", "TotalViews": 113, "TotalDownloads": 22, "TotalVotes": 0, "TotalKernels": 2}]
|
[{"Id": 10196324, "UserName": "singhpriyanshu29", "DisplayName": "Singhpriyanshu", "RegisterDate": "04/09/2022", "PerformanceTier": 0}]
|
# Importing the necessary libraries
import numpy as np
import pandas as pd
import seaborn as sns
import datetime as dt
from datetime import timedelta
from matplotlib import pyplot as plt
import plotly.offline as pyoff
import plotly.graph_objs as go
import plotly.express as px
import scipy.stats as stats
import os
import re
import warnings
warnings.filterwarnings("ignore")
# Checking the current directory
# importing the necessary datasets
cust_info = pd.read_csv(r"/kaggle/input/e-commerce-analytics/CUSTOMERS.csv")
seller_info = pd.read_csv(r"/kaggle/input/e-commerce-analytics/SELLERS.csv")
product_info = pd.read_csv(r"/kaggle/input/e-commerce-analytics/PRODUCTS.csv")
order_info = pd.read_csv(r"/kaggle/input/e-commerce-analytics/ORDERS.csv")
order_item = pd.read_csv(r"/kaggle/input/e-commerce-analytics/ORDER_ITEMS.csv")
payment_info = pd.read_csv(r"/kaggle/input/e-commerce-analytics/ORDER_PAYMENTS.csv")
ratings = pd.read_csv(r"/kaggle/input/e-commerce-analytics/ORDER_REVIEW_RATINGS.csv")
location = pd.read_csv(r"/kaggle/input/e-commerce-analytics/GEO_LOCATION.csv")
# ### Performing EDA on the Datasets
# #### 1. cust_info
cust_info.info(), cust_info.isnull().sum()
# #### 2. Seller_info
seller_info.info(), seller_info.isnull().sum()
## Replacing the null values with the Mode. (Cannot drop the null values as these seller_ids have orders in order_info)
for col in seller_info.columns:
if seller_info[col].dtype == "object":
seller_info[col].fillna(seller_info[col].mode()[0], inplace=True)
# #### 3. Product_info
product_info.info(), product_info.isnull().sum()
# Replacing the null values with the mode for object columns and the mean for numeric columns (to avoid data loss)
for col in product_info.columns:
if product_info[col].dtype == "object":
product_info[col].fillna(product_info[col].mode()[0], inplace=True)
else:
product_info[col].fillna(product_info[col].mean(), inplace=True)
# #### 4. Order_info
order_info.info(), order_info.isnull().sum()
## Converting the date columns to datetime format
order_info["order_approved_at"] = pd.to_datetime(order_info["order_approved_at"])
order_info["order_purchase_timestamp"] = pd.to_datetime(
order_info["order_purchase_timestamp"]
)
order_info["order_delivered_carrier_date"] = pd.to_datetime(
order_info["order_delivered_carrier_date"]
)
order_info["order_delivered_customer_date"] = pd.to_datetime(
order_info["order_delivered_customer_date"]
)
order_info["order_estimated_delivery_date"] = pd.to_datetime(
order_info["order_estimated_delivery_date"]
)
# ### 5. order_item
order_item.info(), order_item.isnull().sum()
## Converting the date column to datetime format
order_item["shipping_limit_date"] = pd.to_datetime(order_item["shipping_limit_date"])
# ### 6. Payment_info
payment_info.info(), payment_info.isnull().sum()
# ### 7. Ratings
ratings.info(), ratings.isnull().sum()
## Converting the date column to datetime format
ratings["review_creation_date"] = pd.to_datetime(ratings["review_creation_date"])
ratings["review_answer_timestamp"] = pd.to_datetime(ratings["review_answer_timestamp"])
# ### 8. locations
location.info(), location.isnull().sum()
# #### Define & calculate high level metrics like (Total Revenue, Total quantity, Total products, Total categories, Total sellers, Total locations, Total channels, Total payment methods etc…)
## The details below cover the high-level metrics asked for above
print("The total number of customers is", cust_info["customer_unique_id"].nunique())
print("The total number of sellers is", seller_info["seller_id"].nunique())
print("The total number of products is", product_info["product_id"].nunique())
print(
    "The total number of product categories is",
    product_info["product_category_name"].nunique(),
)
print("The total number of cities is", cust_info["customer_city"].nunique())
print("The total number of states is", cust_info["customer_state"].nunique())
print("Total revenue generated is", payment_info["payment_value"].sum())
print("Total number of payment channels is", payment_info["payment_type"].nunique())
print("The total quantity ordered is", len(order_item["order_id"]))
# #### Understanding how many new customers were acquired every month
# Merging the customer and order table under "Merge_data"
Merge_data = pd.merge(left=order_info, right=cust_info, how="inner", on="customer_id")
# dropping the duplicates
Merge_data.drop_duplicates(subset="customer_unique_id", keep="first", inplace=True)
# Grouping the data by year & month, then counting customers acquired every month under "New_cust"
New_cust = Merge_data.groupby(
by=[
Merge_data["order_purchase_timestamp"].dt.year,
Merge_data["order_purchase_timestamp"].dt.month,
]
)["customer_unique_id"].count()
# Setting the background colour
ax = plt.axes()
ax.set_facecolor("papayawhip")
# Plotting the bar graph
New_cust.plot(kind="bar", figsize=(18, 8))
plt.xlabel("Month & Year")
plt.ylabel("No. of cust")
plt.title("No. of New cust accuried every month")
plt.show()
# #### Understand the retention of customers on a month-on-month basis
# Merging the customer and order table under "Merger"
Merger = pd.merge(left=order_info, right=cust_info, how="inner", on="customer_id")
# Grouping the data by year & month, then counting all customers every month under "cust"
cust = Merger.groupby(
by=[
Merger["order_purchase_timestamp"].dt.year,
Merger["order_purchase_timestamp"].dt.month,
]
)["customer_unique_id"].count()
# Changing them into dataframes
New_cust = pd.DataFrame(New_cust)
cust = pd.DataFrame(cust)
# Joining both the datasets by concat under "final"
final = pd.concat([New_cust, cust], axis=1)
# setting the column names to "A" & "B"
final = final.set_axis(["A", "B"], axis="columns")
# Getting the number of customer retained every month
final["Retained_cust"] = final["B"] - final["A"]
# Setting the background colour
ax = plt.axes()
ax.set_facecolor("papayawhip")
# Plotting the bar graph
final["Retained_cust"].plot(kind="bar", figsize=(18, 8))
plt.xlabel("Year & Month")
plt.ylabel("No. of cust")
plt.title("No. of cust retained every month")
plt.show()
# #### How do revenues from existing/new customers trend on a month-on-month basis
# Merging the datasets under "xyz"
xyz = pd.merge(left=payment_info, right=order_info, how="left", on="order_id")
# Dropping the columns that are not necessary
xyz.drop(
columns=[
"payment_sequential",
"payment_type",
"payment_installments",
"order_status",
"order_approved_at",
"order_delivered_carrier_date",
"order_delivered_customer_date",
"order_estimated_delivery_date",
],
inplace=True,
)
# Creating a function that will tag the customer as "New" or "Existing"
def tag_customers(df):
# Sort the dataframe by transaction date
df.sort_values("order_purchase_timestamp", inplace=True)
# Create an empty list to store the customer types
customer_types = []
# Initialize a dictionary to store the last transaction date for each customer
last_transaction_dates = {}
# Loop through each row of the dataframe
for index, row in df.iterrows():
# Get the customer id and transaction date for the current row
customer_id = row["customer_id"]
transaction_date = row["order_purchase_timestamp"]
# Check if the customer has made a transaction before
if customer_id in last_transaction_dates:
# Get the last transaction date for the customer
last_transaction_date = last_transaction_dates[customer_id]
# Check if the last transaction was in the same month as the current transaction
if (
last_transaction_date.year == transaction_date.year
and last_transaction_date.month == transaction_date.month
):
# The customer is an existing customer for this month
customer_type = "existing"
else:
# The customer is a new customer for this month
customer_type = "new"
else:
# The customer is a new customer for the first transaction
customer_type = "new"
# Add the customer type to the list
customer_types.append(customer_type)
# Update the last transaction date for the customer
last_transaction_dates[customer_id] = transaction_date
# Add the customer types as a new column to the dataframe
df["customer_type"] = customer_types
return df
# applying the function on the desired dataset
tag_customers(xyz)
# Grouping the values as needed & saving them under "abc"
abc = xyz.groupby(
by=[
xyz["order_purchase_timestamp"].dt.year,
xyz["order_purchase_timestamp"].dt.month,
"customer_type",
]
)["payment_value"].sum()
# converting it to a dataframe under "pqr"
pqr = pd.DataFrame(abc)
# Plotting the bar graph
fig = pqr.plot(kind="bar", figsize=(18, 8), width=0.95)
y = ["{:,.0f}".format(x) for x in fig.get_yticks()]
fig.set_yticklabels(y)
plt.xlabel("Year & Month")
plt.ylabel("Amount Spent")
plt.title("Amount Spent by Existing & New Customer")
plt.show()
# Method 2 By using pivot table
d = xyz.pivot_table(
columns="customer_type",
index=[
xyz["order_purchase_timestamp"].dt.year,
xyz["order_purchase_timestamp"].dt.month,
],
values="payment_value",
aggfunc="sum",
)
# Plotting using Line chart
fig = d.plot(kind="line", figsize=(20, 8))
y = ["{:,.0f}".format(x) for x in fig.get_yticks()]
fig.set_yticklabels(y)
plt.xlabel("Year & Month")
plt.ylabel("Amount Spent")
plt.title("Amount Spent by Existing & New Customer")
plt.show()
# #### Understand the trends/seasonality of sales, quantity by category, location, month, week, day, time, channel, payment method etc…
# ##### Revenue by Month
order_item["Total_price"] = order_item.price + order_item.freight_value
x = order_item.pivot_table(
index=order_item["shipping_limit_date"].dt.month,
values="Total_price",
aggfunc="sum",
).reset_index()
# Plotting the bar graph
fig = x.plot(kind="bar", x="shipping_limit_date", figsize=(18, 8), width=0.95)
y = ["{:,.0f}".format(x) for x in fig.get_yticks()]
fig.set_yticklabels(y)
plt.xlabel("Month")
plt.ylabel("Total Revenue")
plt.title("Revenue by Month ")
plt.show()
# ##### Quantity by category
# Merging the datasets & getting the desired information
qty_data = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
grouped_data = (
qty_data.groupby(by="product_category_name")["order_id"].count().reset_index()
)
# Plotting the bar graph
plot = px.bar(
grouped_data,
x="order_id",
y="product_category_name",
title="Total Quantity sold by Category",
width=1000,
height=1200,
orientation="h",
)
plot.update_layout(xaxis_title="No. of Quantity", yaxis_title="product_category_name")
plot.show("svg")
# ##### Quantity by location
# ##### By City
# Merging the needed dataset
xyz = pd.merge(left=order_info, right=cust_info, on="customer_id", how="left")
# Grouping the data by state & city to get the top 100 cities by orders placed
uvw = (
    xyz.groupby(by=["customer_state", "customer_city"])["order_id"]
    .count()
    .reset_index()
    .nlargest(100, columns="order_id")
)
# plotting the above analysis in Bar Graph
plot = px.bar(
uvw,
x="order_id",
y="customer_city",
title="Top 100 City By Count of Order Placed",
width=1000,
height=1200,
orientation="h",
)
plot.update_layout(xaxis_title="Count of Order Placed", yaxis_title="City")
plot.show("svg")
# ##### By State
# Grouping the data by state to count the orders placed in each state
pqr = xyz.groupby(by="customer_state")["order_id"].count().reset_index()
# plotting the above analysis in Bar Graph
plot = px.bar(
pqr,
x="customer_state",
y="order_id",
title="Count of Order Placed by State",
color="customer_state",
width=1000,
height=700,
)
plot.update_layout(xaxis_title="State", yaxis_title="Count of Order Placed")
plot.show("svg")
# #### Revenue Generated by Days
# Grouping the data by day of month to get revenue generated day-wise
by_day = order_item.pivot_table(
    index=order_item["shipping_limit_date"].dt.day, values="Total_price", aggfunc="sum"
).reset_index()
# plotting the above analysis in Bar Graph
plot = px.bar(
    by_day,
    x="shipping_limit_date",
    y="Total_price",
    title="Revenue Generated by Days",
color="shipping_limit_date",
width=1200,
height=600,
)
plot.update_layout(xaxis_title="Days", yaxis_title="Revenue")
plot.show("svg")
# #### Revenue Generated by Week
# Grouping the data by year & ISO week for "Revenue Generated Week wise"
by_week = order_item.pivot_table(
    index=[
        order_item["shipping_limit_date"].dt.year,
        order_item["shipping_limit_date"].dt.isocalendar().week,
    ],
    values="Total_price",
    aggfunc="sum",
)
# plotting the above analysis in Bar Graph
fig = by_week.plot(kind="bar", figsize=(20, 12), width=0.90)
y = ["{:,.0f}".format(x) for x in fig.get_yticks()]
fig.set_yticklabels(y)
plt.xlabel("Weeks by Year")
plt.ylabel("Total Revenue")
plt.title("Revenue by Weeks ")
plt.show()
# #### Revenue Generated by Hour of Day
# Grouping the data by hour of day for "Revenue Generated by Hour of Day"
by_Hour = order_item.pivot_table(
    index=order_item["shipping_limit_date"].dt.hour, values="Total_price", aggfunc="sum"
).reset_index()
# plotting the above analysis in Bar Graph
plot = px.bar(
by_Hour,
x="shipping_limit_date",
y="Total_price",
title="Revenue Generated on Hourly Basis",
color="shipping_limit_date",
width=1200,
height=600,
)
plot.update_layout(xaxis_title="Hour of Day", yaxis_title="Revenue")
plot.show("svg")
# #### By Channel and Payment Method
# Grouping up the data by payment channel & Total revenue for "Total Revenue by Payment Channel"
by_Channel = payment_info.pivot_table(
index=payment_info["payment_type"], values="payment_value", aggfunc="sum"
).reset_index()
# plotting the above analysis in Bar Graph & Pie chart
plot = px.bar(
by_Channel,
x="payment_type",
y="payment_value",
title="Total Revenue by Payment Channel",
width=1000,
height=500,
)
plot.update_layout(xaxis_title="Payment Channel", yaxis_title="Revenue")
plot.show("svg")
fig = px.pie(
by_Channel,
names="payment_type",
values="payment_value",
title="Total Revenue by Payment Channel",
)
fig.show("svg")
# #### Popular Product by Month
# Merging the required Dataset
qty_data = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
# Getting and sorting the data needed
pqr = (
qty_data.groupby(by=[qty_data["shipping_limit_date"].dt.month, "product_id"])[
"order_id"
]
.count()
.reset_index()
)
abc = (
pqr.groupby("shipping_limit_date")
.apply(lambda x: x.nlargest(3, "order_id"))
.reset_index(drop=True)
)
# plotting the above analysis in stacked Bar Graph
plot = px.bar(
abc,
x="shipping_limit_date",
y="order_id",
color="product_id",
title="Top 3 product by Each Month",
width=1400,
height=1000,
)
plot.update_layout(xaxis_title="Month", yaxis_title="Total_count")
plot.show("svg")
# #### Popular Product by seller
# Getting and sorting the data needed
by_seller = (
order_item.groupby(by=["seller_id", "product_id"])["order_id"].count().reset_index()
)
by_seller
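# `by_seller` counts every (seller, product) pair but does not yet pick a winner
# per seller. Minimal sketch (illustrative) of each seller's single best-selling product:
top_prod_per_seller = (
    by_seller.sort_values("order_id", ascending=False)
    .drop_duplicates(subset="seller_id", keep="first")
    .rename(columns={"order_id": "units_sold"})
)
top_prod_per_seller.head()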
# #### Popular Product by Month
prod_Month = (
order_item.groupby(by=[order_item["shipping_limit_date"].dt.month, "product_id"])[
"order_id"
]
.count()
.reset_index()
)
pop_prod_Month = (
prod_Month.groupby("shipping_limit_date")
.apply(lambda x: x.nlargest(3, "order_id"))
.reset_index(drop=True)
)
pop_prod_Month
# #### Popular Product by State
# Getting and sorting the data needed
prod_state = pd.merge(left=order_item, right=seller_info, on="seller_id", how="left")
pop_prod = (
prod_state.groupby(by=["seller_state", "product_id"])["order_id"]
.count()
.reset_index()
)
pop_prod_state = (
pop_prod.groupby("seller_state")
.apply(lambda x: x.nlargest(3, "order_id"))
.reset_index(drop=True)
)
# plotting the above analysis in Bar Graph
plot = px.bar(
pop_prod_state,
x="order_id",
y="product_id",
color="seller_state",
title="Top 3 popular product by State",
width=1200,
height=1000,
orientation="h",
)
plot.update_layout(xaxis_title="Count", yaxis_title="Product", paper_bgcolor="cornsilk")
plot.show("svg")
# #### Popular Product by Product category
# Getting and sorting the data needed
prod_cat = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
xyz = (
prod_cat.groupby(by=["product_category_name", "product_id"])["order_id"]
.count()
.reset_index()
)
pop_prod_cat = (
xyz.groupby("product_category_name")
.apply(lambda x: x.nlargest(3, "order_id"))
.reset_index(drop=True)
)
# plotting the above analysis in Bar Graph
fig = px.bar(
pop_prod_cat,
x="order_id",
y="product_id",
color="product_category_name",
title="Top 3 popular product by Category",
width=1000,
height=1600,
orientation="h",
)
fig.update_layout(xaxis_title="Count", yaxis_title="Product")
fig.update_layout(legend=dict(bgcolor="lightblue"), paper_bgcolor="seashell")
fig.show("svg")
# #### Popular Categories by state
# Getting and sorting the data needed
a = pd.merge(left=prod_state, right=product_info, on="product_id", how="left")
a.drop(
columns=[
"product_photos_qty",
"product_weight_g",
"price",
"freight_value",
"Total_price",
"seller_zip_code_prefix",
"product_name_lenght",
"product_description_lenght",
"product_length_cm",
"product_height_cm",
"product_width_cm",
],
inplace=True,
)
group_data = (
a.groupby(by=["seller_state", "product_category_name"])["order_id"]
.count()
.reset_index()
)
pop_category = (
group_data.groupby("seller_state")
.apply(lambda x: x.nlargest(1, "order_id"))
.reset_index(drop=True)
)
# plotting the above analysis in Bar Graph
plot = px.bar(
pop_category,
x="seller_state",
y="order_id",
color="product_category_name",
title="Most popular Category by State",
width=1000,
height=800,
)
plot.update_layout(xaxis_title="State", yaxis_title="Count", paper_bgcolor="cornsilk")
plot.show("svg")
# #### Popular Categories by Month
# Getting and sorting the data needed
by_month = (
a.groupby(by=[a["shipping_limit_date"].dt.month, "product_category_name"])[
"order_id"
]
.count()
.reset_index()
)
Monthly = (
by_month.groupby("shipping_limit_date")
.apply(lambda x: x.nlargest(2, "order_id"))
.reset_index(drop=True)
)
# plotting the above analysis in Bar Graph
plot = px.bar(
Monthly,
x="order_id",
y="shipping_limit_date",
color="product_category_name",
title="Top 2 popular category by Month",
width=1200,
height=800,
orientation="h",
)
plot.update_layout(xaxis_title="Count", yaxis_title="Month", paper_bgcolor="cornsilk")
plot.show("svg")
# #### List top 10 most expensive products sorted by price
# Getting and sorting the data needed
For_Price = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
For_Price.drop(
columns=[
"freight_value",
"price",
"order_id",
"seller_id",
"shipping_limit_date",
"order_item_id",
"product_name_lenght",
"product_description_lenght",
"product_photos_qty",
"product_weight_g",
"product_length_cm",
"product_height_cm",
"product_width_cm",
],
inplace=True,
)
# Top 10 most expensive order items by total price (price + freight)
For_Price.nlargest(10, columns="Total_price").reset_index(drop=True)
# #### Divide the customers into groups based on the revenue generated
# Getting and sorting the data needed
segment = pd.merge(left=order_item, right=order_info, on="order_id", how="left")
segment.drop(
columns=[
"order_item_id",
"seller_id",
"order_status",
"order_purchase_timestamp",
"order_approved_at",
"order_delivered_carrier_date",
"order_delivered_customer_date",
"order_estimated_delivery_date",
],
inplace=True,
)
seg_cust = segment.groupby(by="customer_id")["Total_price"].sum().reset_index()
bins = [0, 2000, 4000, 6000, 8000, 10000, 12000, 14000]
labels = [
"0 - 2000",
"2000 - 4000",
"4000 - 6000",
"6000 - 8000",
"8000 - 10000",
"10000 - 12000",
"12000 - 14000",
]
seg_cust["customer_seg"] = pd.cut(
seg_cust.Total_price, bins, labels=labels, include_lowest=True
)
plot_data = seg_cust.groupby(by="customer_seg")["customer_id"].count().reset_index()
plot_data
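# The seller segmentation further below gets a chart, but this customer
# segmentation is only printed. Minimal sketch (illustrative) of the matching bar graph:
plot = px.bar(
    plot_data,
    x="customer_seg",
    y="customer_id",
    title="Customer count by revenue segment",
    width=800,
    height=600,
)
plot.update_layout(
    xaxis_title="Revenue segment", yaxis_title="Customer count", paper_bgcolor="cornsilk"
)
plot.show("svg")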
# #### Divide the Sellers into groups based on the revenue generated
# Getting and sorting the data needed
segment = pd.merge(left=order_item, right=order_info, on="order_id", how="left")
segment.drop(
columns=[
"order_item_id",
"customer_id",
"order_status",
"order_purchase_timestamp",
"order_approved_at",
"order_delivered_carrier_date",
"order_delivered_customer_date",
"order_estimated_delivery_date",
],
inplace=True,
)
seg_seller = segment.groupby(by="seller_id")["Total_price"].sum().reset_index()
bins = [0, 50000, 100000, 150000, 200000, 250000]
labels = ["Poor", "Fair", "Good", "Best", "Excellent"]
seg_seller["Seller_seg"] = pd.cut(
seg_seller.Total_price, bins, labels=labels, include_lowest=True
)
plot_data = seg_seller.groupby(by="Seller_seg")["seller_id"].count().reset_index()
plot_data
# plotting the above analysis in Bar Graph
plot = px.bar(
plot_data,
x="seller_id",
y="Seller_seg",
color="Seller_seg",
title="Seller count by Seller Segment",
width=800,
height=600,
orientation="h",
)
plot.update_layout(
xaxis_title="seller Count", yaxis_title="Seller Segment", paper_bgcolor="cornsilk"
)
plot.show("svg")
# #### Cross-Selling (Which products are selling together)
# ##### Hint: We need to find which of the top 10 combinations of products are selling together in each transaction. (combination of 2 or 3 buying together)
# Getting and sorting the data needed
D = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
# Keep only orders that contain more than one item (explicit copy avoids SettingWithCopyWarning)
E = D[D["order_id"].duplicated(keep=False)].copy()
E["Cat_Group"] = E.groupby(by="order_id")["product_category_name"].transform(
lambda x: " & ".join(map(str, x))
)
E = E[["order_id", "Cat_Group"]].drop_duplicates()
F = (
E.groupby(by="Cat_Group")
.count()
.reset_index()
.sort_values("order_id", ascending=False)
.head(10)
)
# plotting the above analysis in Bar Graph
plot = px.bar(
F,
x="Cat_Group",
y="order_id",
color="Cat_Group",
title="Top 10 Product Category Sold Together",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="Category Group", yaxis_title="Count", paper_bgcolor="cornsilk"
)
plot.show("svg")
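# The hint asks for combinations of 2 (or 3) products bought together; the string
# join above depends on the row order within an order. An alternative sketch
# (illustrative) that counts unordered category pairs per order directly:
from itertools import combinations
from collections import Counter

pair_counts = Counter()
for _, cats in D.groupby("order_id")["product_category_name"]:
    for pair in combinations(sorted(set(cats.dropna())), 2):
        pair_counts[pair] += 1
pd.Series(pair_counts).nlargest(10)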
# #### How are customers paying?
# Getting and sorting the data needed
H = payment_info.groupby(by="payment_installments")["order_id"].count().reset_index()
# plotting the above analysis in Bar Graph
plot = px.bar(
H,
x="payment_installments",
y="order_id",
color="payment_installments",
title="payment pattern of customers",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="No. of Installments",
yaxis_title="Count of cust",
paper_bgcolor="cornsilk",
)
plot.show("svg")
# #### Which payment channels are used by most customers?
# Getting and sorting the data needed
H = payment_info.groupby(by="payment_type")["order_id"].count().reset_index()
# plotting the above analysis in Pie Chart
fig = px.pie(
    H,
    names="payment_type",
    values="order_id",
    title="Count of Orders by Payment Channel",
color_discrete_sequence=px.colors.sequential.Pinkyl_r,
)
fig.show("svg")
# #### Which categories (top 10) are maximum rated & minimum rated?
# Getting and sorting the data needed
G = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
J = pd.merge(left=G, right=ratings, on="order_id", how="left")
J.drop(
columns=[
"order_item_id",
"seller_id",
"price",
"freight_value",
"Total_price",
"product_name_lenght",
"product_description_lenght",
"product_photos_qty",
"product_weight_g",
"product_length_cm",
"product_height_cm",
"product_width_cm",
"review_answer_timestamp",
"review_creation_date",
],
inplace=True,
)
J = J.drop_duplicates(subset="review_id", keep="first")
J = J.dropna(axis=0)
# ##### Top 10 by Max ratings
# Getting and sorting the data needed
prod_Max_Rating = J.loc[J["review_score"] == 5]
Top_10 = (
prod_Max_Rating.groupby(by=["product_category_name"])["review_score"]
.count()
.reset_index()
.sort_values("review_score", ascending=False)
.head(10)
)
# plotting the above analysis in Bar Graph
plot = px.bar(
Top_10,
x="product_category_name",
y="review_score",
color="product_category_name",
title="Top 10 category by Max Rating",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="Categories",
yaxis_title='Count of Max Rating "5" ',
paper_bgcolor="cornsilk",
)
plot.show("svg")
# ##### Top 10 by Min ratings
# Getting and sorting the data needed
prod_Min_Rating = J.loc[J["review_score"] == 1]
Bottom_10 = (
prod_Min_Rating.groupby(by=["product_category_name"])["review_score"]
.count()
.reset_index()
.sort_values("review_score", ascending=False)
.head(10)
)
# plotting the above analysis in Bar Graph
plot = px.bar(
Bottom_10,
x="product_category_name",
y="review_score",
color="product_category_name",
title="Top 10 category by Min Rating",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="Categories",
yaxis_title='Count of Min Rating "1" ',
paper_bgcolor="cornsilk",
)
plot.show("svg")
# #### Which products (top10) are maximum rated & minimum rated?
# ##### Top 10 by Max Ratings
# Getting and sorting the data needed
Max_prod_Rating = J.loc[J["review_score"] == 5]
Top_10 = (
Max_prod_Rating.groupby(by=["product_id"])["review_score"]
.count()
.reset_index()
.sort_values("review_score", ascending=False)
.head(10)
)
# plotting the above analysis in Bar Graph
plot = px.bar(
Top_10,
x="product_id",
y="review_score",
color="product_id",
title="Top 10 Product by Max Rating",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="product_id",
yaxis_title='Count of Max Rating "5" ',
paper_bgcolor="cornsilk",
)
plot.show("svg")
# ##### Top 10 by Min Ratings
# Getting and sorting the data needed
Min_prod_Rating = J.loc[J["review_score"] == 1]
Lowest_10 = (
Min_prod_Rating.groupby(by=["product_id"])["review_score"]
.count()
.reset_index()
.sort_values("review_score", ascending=False)
.head(10)
)
# plotting the above analysis in Bar Graph
plot = px.bar(
Lowest_10,
x="product_id",
y="review_score",
color="product_id",
title="Top 10 Product by Min Rating",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="product_id",
yaxis_title='Count of Min Rating "1" ',
paper_bgcolor="cornsilk",
)
plot.show("svg")
# #### Average rating by location, seller, product, category, month etc.
# ##### Average Ratings by Location
# ###### By City
# Getting and sorting the data needed
L = pd.merge(left=order_info, right=cust_info, on="customer_id", how="left")
N = pd.merge(left=L, right=ratings, on="order_id", how="left")
N.drop(
columns=[
"customer_id",
"order_status",
"order_purchase_timestamp",
"order_approved_at",
"order_delivered_carrier_date",
"order_delivered_customer_date",
"order_estimated_delivery_date",
"customer_unique_id",
"customer_zip_code_prefix",
"review_answer_timestamp",
],
inplace=True,
)
# Getting and sorting the data needed
P = (
N.groupby(by="customer_city")["review_score"]
.mean()
.reset_index()
.sort_values(by="customer_city")
)
P.rename(columns={"customer_city": "City", "review_score": "Avg_review"}, inplace=True)
P
# ###### By State
# Getting and sorting the data needed
Q = (
N.groupby(by="customer_state")["review_score"]
.mean()
.reset_index()
.sort_values(by="customer_state")
)
Q.rename(
columns={"customer_state": "State", "review_score": "Avg_review"}, inplace=True
)
# plotting the above analysis in Bar Graph
plot = px.bar(
Q,
x="Avg_review",
y="State",
color="State",
title="Average Rating by State",
width=1000,
height=700,
orientation="h",
)
plot.update_layout(
xaxis_title="Average Rating", yaxis_title="State", paper_bgcolor="cornsilk"
)
plot.show("svg")
# ##### Average Ratings by Seller
# Getting and sorting the data needed
O = pd.merge(left=order_item, right=ratings, on="order_id", how="left")
W = O.groupby(by="seller_id")["review_score"].mean().reset_index()
W.rename(
    columns={"seller_id": "Seller", "review_score": "Avg_review"}, inplace=True
)
W
# ##### Average Ratings by Product
# Getting and sorting the data needed
S = pd.merge(left=order_item, right=product_info, on="product_id", how="left")
T = pd.merge(left=S, right=ratings, on="order_id", how="left")
T.drop(
columns=[
"order_id",
"order_item_id",
"seller_id",
"price",
"freight_value",
"Total_price",
"shipping_limit_date",
"product_name_lenght",
"product_description_lenght",
"product_photos_qty",
"product_weight_g",
"product_length_cm",
"product_height_cm",
"product_width_cm",
"review_creation_date",
],
inplace=True,
)
U = T.groupby(by="product_id")["review_score"].mean().reset_index()
U.rename(columns={"product_id": "Product", "review_score": "Avg_review"}, inplace=True)
U
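# Averages based on a single review are noisy. Minimal sketch (illustrative) of
# restricting the product ranking to products with at least 10 reviews:
review_counts = T.groupby(by="product_id")["review_score"].count()
well_reviewed = review_counts[review_counts >= 10].index
U[U["Product"].isin(well_reviewed)].sort_values("Avg_review", ascending=False).head(10)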
# ##### Average Ratings by Category
# Getting and sorting the data needed
V = T.groupby(by="product_category_name")["review_score"].mean().reset_index()
V.rename(
columns={"product_category_name": "Category", "review_score": "Avg_review"},
inplace=True,
)
# plotting the above analysis in Bar Graph
plot = px.bar(
V,
x="Avg_review",
y="Category",
color="Category",
title="Average rating by category",
width=1000,
height=1200,
)
plot.update_layout(
xaxis_title="Avg_review", yaxis_title="Category ", paper_bgcolor="cornsilk"
)
plot.show("svg")
# ##### Average Ratings by Month
# Getting and sorting the data needed
Z = (
T.groupby(by=T["review_answer_timestamp"].dt.month)["review_score"]
.mean()
.reset_index()
)
Z.rename(
columns={"review_answer_timestamp": "Month", "review_score": "Avg_review"},
inplace=True,
)
# plotting the above analysis in Bar Graph
plot = px.bar(
Z,
x="Month",
y="Avg_review",
color="Month",
title="Average rating by Month",
width=1000,
height=700,
)
plot.update_layout(
xaxis_title="Month", yaxis_title="Avg_review ", paper_bgcolor="cornsilk"
)
plot.show("svg")
| false | 8 | 9,861 | 0 | 9,883 | 9,861 |
||
129424049
|
<jupyter_start><jupyter_text>Wind Power Production US (2001-2023)
# Description
This dataset, provided by the [U.S. Energy Information Administration (EIA)](https://www.eia.gov) in the Electric Power Monthly report, contains [monthly data on wind energy production and other renewables in the United States.](https://www.eia.gov/electricity/data/browser/#/topic/0?agg=2,1,0&fuel=028&geo=vvvvvvvvvvvvo&sec=g&linechart=ELEC.GEN.AOR-US-99.M&columnchart=ELEC.GEN.AOR-US-99.M&map=ELEC.GEN.AOR-US-99.M&freq=M&ctype=linechart<ype=pin&rtype=s&maptype=0&rse=0&pin=)
# Usage / Content
The dataset is a simple .csv file that could be read thanks to *pandas* python package :
- Data Format: CSV
- Data Volume: ~1 MB per month
```python
import pandas as pd
df = pd.read_csv('/kaggle/working/wind-power-production-us/wind-power-production-us.csv')
```
Here is some other information about the available variables:
- Time Range: January 2001 to the latest month available
- Geographic Coverage: United States
- Granularity: Monthly
- Variables:
- "date": Month and year
- "wind_state_name" : wind power production for the current state
- "other_state_name" : production for all other renewables sources for the current state
# Potential Uses
- Conducting time series analysis to forecast wind energy production and capacity factors
- Performing exploratory data analysis to identify trends and patterns in wind energy production
- Comparing wind energy production to other electricity generation sources to inform policy decisions
- Modeling wind energy production and capacity factors for forecasting and planning purposes
- Evaluating the impact of policy changes on wind energy production in the United States
Kaggle dataset identifier: wind-power-production-us-2001-2023
<jupyter_script>import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("ggplot")
df = pd.read_csv(
"/kaggle/input/wind-power-production-us-2001-2023/wind-power-production-us.csv",
index_col="date",
parse_dates=True,
)
df.info()
df.describe()
wind_colnames = [col for col in list(df.columns) if "wind" in col]
other_colnames = [col for col in list(df.columns) if not "wind" in col]
wind_colnames
other_colnames
plt.figure(figsize=(10, 5))
for var in ["middle_atlantic", "hawaii"]:
    plt.plot(df["wind_" + var], label="Wind " + var)
plt.plot(df["other_" + var], label="Others " + var)
plt.legend()
plt.show()
plt.figure(figsize=(10, 5))
for col in ["wind_middle_atlantic", "wind_massachusetts"]:
plt.plot(df[col], label=col.replace("wind_", ""))
plt.legend()
plt.show()
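# A minimal sketch (assuming the monthly DatetimeIndex loaded above) of aggregating
# one region's wind series to yearly totals to see the long-run trend:
yearly_wind = df["wind_middle_atlantic"].resample("Y").sum()
plt.figure(figsize=(10, 5))
plt.plot(yearly_wind, marker="o", label="middle_atlantic (yearly total)")
plt.legend()
plt.show()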
||
129424845
|
# # Contradictory, My Dear Watson
# Natural Language Inferencing (NLI) is a popular NLP problem that involves determining how pairs of sentences (consisting of a premise and a hypothesis) are related.
# The task is to create an NLI model that assigns labels of 0, 1, or 2 (corresponding to entailment, neutral, and contradiction) to pairs of premises and hypotheses. The train and test set include text in fifteen different languages.
# Libraries
import numpy as np # linear algebra
import pandas as pd # read, store and manipulate data
import matplotlib.pyplot as plt # Data visualization
import torch # PyTorch ML framework
# HuggingFace API
from transformers import AdamW, AutoTokenizer, AutoModelForSequenceClassification
# Reading data
train_data = pd.read_csv("/kaggle/input/contradictory-my-dear-watson/train.csv")
test_data = pd.read_csv("/kaggle/input/contradictory-my-dear-watson/test.csv")
print("Number of rows")
len(train_data.index)
print("Columns and datatypes")
train_data.dtypes
print("Labels")
np.sort(train_data["label"].unique())
# Where
# 0 - Entailment
# 1 - Neutral
# 2 - Contradiction
label_count = train_data["label"].value_counts().sort_index()
labels_name = ["entailment", "neutral", "contradiction"]
label_count.index = labels_name
print("Number of entries per label")
print(label_count)
custom_colors = ["#F8766D", "#00BA38", "#619CFF"]
label_count.plot.bar(color=custom_colors)
plt.xlabel("Label")
plt.ylabel("Count")
plt.title("Number of entries per label")
plt.show()
unique_values = train_data.nunique()
print("Number of languages")
print(unique_values["language"])
language_count = train_data["language"].value_counts(sort=True, ascending=False)
language_count_log = np.log(language_count)
print("Number of entries per language")
print(language_count)
custom_colors = [
"#F8766D",
"#E58700",
"#C99800",
"#A3A500",
"#6BB100",
"#00BA38",
"#00BF7D",
"#00C0AF",
"#00BCD8",
"#00B0F6",
"#619CFF",
"#B983FF",
"#E76BF3",
"#FD61D1",
"#FF67A4",
]
language_count_log.plot.bar(color=custom_colors)
plt.xlabel("Language")
plt.ylabel("Count (log)")
plt.title("Number of entries per language")
plt.show()
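# The transformer imports above are not used yet. A minimal sketch, assuming a
# multilingual checkpoint such as "xlm-roberta-base" (an assumption, not part of
# the original notebook), of tokenizing premise/hypothesis pairs for fine-tuning:
model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
encodings = tokenizer(
    train_data["premise"].tolist(),
    train_data["hypothesis"].tolist(),
    truncation=True,
    padding=True,
    max_length=128,
    return_tensors="pt",
)
labels = torch.tensor(train_data["label"].values)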
||
129424595
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# Extracting text with pytesseract requires the Tesseract OCR engine to be installed in the system environment. The commands below show a typical installation.
#
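# The original install commands are not shown in this copy. A typical setup on a
# Debian-based environment would be (assumption, run as notebook shell commands):
#   !apt-get install -y tesseract-ocr
#   !pip install pytesseract opencv-python pillow
# If the engine sits somewhere unusual, pytesseract can be pointed at it explicitly:
import shutil
from pytesseract import pytesseract

# Locate the tesseract binary on PATH, falling back to the usual Debian location (assumed)
pytesseract.tesseract_cmd = shutil.which("tesseract") or "/usr/bin/tesseract"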
# # Extracting text from a grayscale image
from PIL import Image
from pytesseract import pytesseract
image = Image.open("/kaggle/input/text-image/1_xZl-w1g3yDNiPVP5JHPlFA.webp")
image = image.resize((400, 200))
image.save("sample.png")
image
# from IPython.display import Image
#
text = pytesseract.image_to_string(image)
# print the text line by line
print(text[:-1])
# # Detecting and Extracting text from color image
# In this example, we will use OpenCV also to use the bounding box and other methods for OpenCV.
# Install the libraries for this example
import cv2
from pytesseract import pytesseract
image = Image.open("/kaggle/input/colour-image-text/1_Gs6ic7I5WvfJGPlnYfo3nw.webp")
image = image.resize((400, 200))
image.save("sample1.png")
image
# Reading the image with the help of OpenCV method.
img = cv2.imread("/kaggle/input/colour-image-text/1_Gs6ic7I5WvfJGPlnYfo3nw.webp")
# Converting the color image to grayscale image for better text processing.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Now, we will convert the grayscale image to binary image to enhance the chance of text extracting.
# Here, imwrite method is used to save the image in the working directory.
ret, thresh1 = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
cv2.imwrite("threshold_image.jpg", thresh1)
image = Image.open("/kaggle/working/threshold_image.jpg")
image
text = pytesseract.image_to_string(image)
# print the text line by line
print(text[:-1])
# To capture whole sentences (or even single words) from the image, we create a structuring element in OpenCV, with the kernel size chosen according to the area of the text.
rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (12, 12))
# The next step is to use the dilation method on the binary image to get the boundaries of the text.
dilation = cv2.dilate(thresh1, rect_kernel, iterations=3)
cv2.imwrite("dilation_image.jpg", dilation)
image = Image.open("/kaggle/working/dilation_image.jpg")
image = image.resize((400, 200))
image
# We can increase the number of dilation iterations, depending on the density of the foreground (white) pixels, to get a proper bounding-box shape.
# Now, we will use the find contour method to get the area of the white pixels.
contours, hierarchy = cv2.findContours(
dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE
)
im2 = img.copy()
# Now, it’s time to get the magic to happen on the image. Here we will get the coordinates of the white pixel area and draw the bounding box around it. We will also save the text from the image in the text file.
for cnt in contours:
x, y, w, h = cv2.boundingRect(cnt)
# Draw the bounding box on the text area
rect = cv2.rectangle(im2, (x, y), (x + w, y + h), (0, 255, 0), 2)
# Crop the bounding box area
cropped = im2[y : y + h, x : x + w]
cv2.imwrite("rectanglebox.jpg", rect)
# open the text file
file = open("text_output2.txt", "a")
# Using tesseract on the cropped image area to get text
text = pytesseract.image_to_string(cropped)
# Adding the text to the file
file.write(text)
file.write("\n")
# Closing the file
    file.close()
text
image = Image.open("/kaggle/working/rectanglebox.jpg")
image
myfile = open("/kaggle/working/text_output2.txt")
myfile
# Readlines returns a list of the lines in the file
myfile.seek(0)
myfile.readlines()
crop_number = 0
for cnt in contours:
x, y, w, h = cv2.boundingRect(cnt)
# Draw the bounding box on the text area
rect = cv2.rectangle(im2, (x, y), (x + w, y + h), (0, 255, 0), 2)
# Crop the bounding box area
cropped = im2[y : y + h, x : x + w]
cv2.imwrite("crop" + str(crop_number) + ".jpeg", cropped)
crop_number += 1
cv2.imwrite("rectanglebox.jpg", rect)
# open the text file
file = open("text_output2.txt", "a")
# Using tesseract on the cropped image area to get text
text = pytesseract.image_to_string(cropped)
# Adding the text to the file
file.write(text)
file.write("\n")
# Closing the file
    file.close()
image1 = Image.open("/kaggle/working/crop0.jpeg")
image2 = Image.open("/kaggle/working/crop1.jpeg")
image3 = Image.open("/kaggle/working/crop2.jpeg")
image1
image2
image3
||
129424604
|
<jupyter_start><jupyter_text>Eye State Classification EEG Dataset
# Eye State Classification EEG Dataset
All data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset.
The duration of the measurement was 117 seconds.
The eye state was detected via a camera during the EEG measurement and added later manually to the file after analysing the video frames.
Target:
- `1` indicates the eye-closed and
- `0` the eye-open state.
All values are in chronological order with the first measured value at the top of the data.
Reference link: https://archive.ics.uci.edu/ml/datasets/EEG+Eye+State
Kaggle dataset identifier: eye-state-classification-eeg-dataset
<jupyter_code>import pandas as pd
df = pd.read_csv('eye-state-classification-eeg-dataset/EEG_Eye_State_Classification.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 14980 entries, 0 to 14979
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 AF3 14980 non-null float64
1 F7 14980 non-null float64
2 F3 14980 non-null float64
3 FC5 14980 non-null float64
4 T7 14980 non-null float64
5 P7 14980 non-null float64
6 O1 14980 non-null float64
7 O2 14980 non-null float64
8 P8 14980 non-null float64
9 T8 14980 non-null float64
10 FC6 14980 non-null float64
11 F4 14980 non-null float64
12 F8 14980 non-null float64
13 AF4 14980 non-null float64
14 eyeDetection 14980 non-null int64
dtypes: float64(14), int64(1)
memory usage: 1.7 MB
<jupyter_text>Examples:
{
"AF3": 4329.23,
"F7": 4009.23,
"F3": 4289.23,
"FC5": 4148.21,
"T7": 4350.26,
"P7": 4586.15,
"O1": 4096.92,
"O2": 4641.03,
"P8": 4222.05,
"T8": 4238.46,
"FC6": 4211.28,
"F4": 4280.51,
"F8": 4635.9,
"AF4": 4393.85,
"eyeDetection": 0.0
}
{
"AF3": 4324.62,
"F7": 4004.62,
"F3": 4293.85,
"FC5": 4148.72,
"T7": 4342.05,
"P7": 4586.67,
"O1": 4097.44,
"O2": 4638.97,
"P8": 4210.77,
"T8": 4226.67,
"FC6": 4207.69,
"F4": 4279.49,
"F8": 4632.82,
"AF4": 4384.1,
"eyeDetection": 0.0
}
{
"AF3": 4327.69,
"F7": 4006.67,
"F3": 4295.38,
"FC5": 4156.41,
"T7": 4336.92,
"P7": 4583.59,
"O1": 4096.92,
"O2": 4630.26,
"P8": 4207.69,
"T8": 4222.05,
"FC6": 4206.67,
"F4": 4282.05,
"F8": 4628.72,
"AF4": 4389.23,
"eyeDetection": 0.0
}
{
"AF3": 4328.72,
"F7": 4011.79,
"F3": 4296.41,
"FC5": 4155.9,
"T7": 4343.59,
"P7": 4582.56,
"O1": 4097.44,
"O2": 4630.77,
"P8": 4217.44,
"T8": 4235.38,
"FC6": 4210.77,
"F4": 4287.69,
"F8": 4632.31,
"AF4": 4396.41,
"eyeDetection": 0.0
}
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split, cross_val_score
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
plt.style.use("fivethirtyeight")
colors = ["#011f4b", "#03396c", "#005b96", "#6497b1", "#b3cde0"]
sns.set_palette(sns.color_palette(colors))
# This work aims to show how remarkable neural waves are: through them it is possible to detect body movements, tell whether a person is confused in a video class, or analyse an emotional state. I hope this technology will drive future advances in movement-assistance technologies.
# If it is possible to detect whether a person's eyes are open, then it should also be possible to detect whether an arm or a leg moves, etc.
# Look at this image, it contains the places where the electrodes are placed on the head.
# 
df = pd.read_csv(
"/kaggle/input/eye-state-classification-eeg-dataset/EEG_Eye_State_Classification.csv"
)
df.head(2)
# checking for null values
df.isnull().sum()
sns.set_theme(style="darkgrid")
for col in df.columns[0:-1]:
plt.figure(figsize=(12, 8))
sns.lineplot(x=df.index, y=col, data=df, hue="eyeDetection")
plt.show()
X = df.iloc[:, :-1] # independent variable
y = np.array(df.iloc[:, -1]) # dependent variable
y = y.reshape(-1, 1) # reshaping data
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
y = scaler.fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42
)
print("X train : ", X_train.shape)
print("X test : ", X_test.shape)
print("y train : ", y_train.shape)
print("y test : ", y_test.shape)
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, r2_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier
# Defined object from library classification
LR = LogisticRegression()
DTR = DecisionTreeClassifier()
RFR = RandomForestClassifier()
KNR = KNeighborsClassifier()
MLP = MLPClassifier()
XGB = XGBClassifier()
SVR = SVC()
# make for loop for classification
li = [LR, DTR, RFR, KNR, MLP, XGB, SVR]
d = {}
for i in li:
    # ravel flattens the target to the 1-D array that scikit-learn expects
    i.fit(X_train, y_train.ravel())
    ypred = i.predict(X_test)
    print(i, ":", accuracy_score(y_test, ypred) * 100)
    d.update({str(i): i.score(X_test, y_test) * 100})
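# The scores collected in `d` are never visualized above. Minimal sketch
# (illustrative) of comparing them in a horizontal bar chart:
scores = pd.Series(d).sort_values()
plt.figure(figsize=(10, 6))
scores.plot(kind="barh")
plt.xlabel("Accuracy (%)")
plt.title("Test accuracy by classifier")
plt.tight_layout()
plt.show()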
<start_data_description><data_path>eye-state-classification-eeg-dataset/EEG_Eye_State_Classification.csv:
<column_names>
['AF3', 'F7', 'F3', 'FC5', 'T7', 'P7', 'O1', 'O2', 'P8', 'T8', 'FC6', 'F4', 'F8', 'AF4', 'eyeDetection']
<column_types>
{'AF3': 'float64', 'F7': 'float64', 'F3': 'float64', 'FC5': 'float64', 'T7': 'float64', 'P7': 'float64', 'O1': 'float64', 'O2': 'float64', 'P8': 'float64', 'T8': 'float64', 'FC6': 'float64', 'F4': 'float64', 'F8': 'float64', 'AF4': 'float64', 'eyeDetection': 'int64'}
<dataframe_Summary>
{'AF3': {'count': 14980.0, 'mean': 4321.917777036048, 'std': 2492.0721742650967, 'min': 1030.77, '25%': 4280.51, '50%': 4294.36, '75%': 4311.79, 'max': 309231.0}, 'F7': {'count': 14980.0, 'mean': 4009.767693591455, 'std': 45.94167248479191, 'min': 2830.77, '25%': 3990.77, '50%': 4005.64, '75%': 4023.08, 'max': 7804.62}, 'F3': {'count': 14980.0, 'mean': 4264.0224325767695, 'std': 44.428051757419404, 'min': 1040.0, '25%': 4250.26, '50%': 4262.56, '75%': 4270.77, 'max': 6880.51}, 'FC5': {'count': 14980.0, 'mean': 4164.946326435247, 'std': 5216.404632299904, 'min': 2453.33, '25%': 4108.21, '50%': 4120.51, '75%': 4132.31, 'max': 642564.0}, 'T7': {'count': 14980.0, 'mean': 4341.741075433912, 'std': 34.73882081848662, 'min': 2089.74, '25%': 4331.79, '50%': 4338.97, '75%': 4347.18, 'max': 6474.36}, 'P7': {'count': 14980.0, 'mean': 4644.02237917223, 'std': 2924.789537325123, 'min': 2768.21, '25%': 4611.79, '50%': 4617.95, '75%': 4626.67, 'max': 362564.0}, 'O1': {'count': 14980.0, 'mean': 4110.400159546061, 'std': 4600.926542533788, 'min': 2086.15, '25%': 4057.95, '50%': 4070.26, '75%': 4083.59, 'max': 567179.0}, 'O2': {'count': 14980.0, 'mean': 4616.0569038718295, 'std': 29.292603201776213, 'min': 4567.18, '25%': 4604.62, '50%': 4613.33, '75%': 4624.1, 'max': 7264.1}, 'P8': {'count': 14980.0, 'mean': 4218.826610146863, 'std': 2136.408522887388, 'min': 1357.95, '25%': 4190.77, '50%': 4199.49, '75%': 4209.23, 'max': 265641.0}, 'T8': {'count': 14980.0, 'mean': 4231.316199599466, 'std': 38.050902621216295, 'min': 1816.41, '25%': 4220.51, '50%': 4229.23, '75%': 4239.49, 'max': 6674.36}, 'FC6': {'count': 14980.0, 'mean': 4202.456899866489, 'std': 37.78598137403711, 'min': 3273.33, '25%': 4190.26, '50%': 4200.51, '75%': 4211.28, 'max': 6823.08}, 'F4': {'count': 14980.0, 'mean': 4279.232774365822, 'std': 41.544311516664244, 'min': 2257.95, '25%': 4267.69, '50%': 4276.92, '75%': 4287.18, 'max': 7002.56}, 'F8': {'count': 14980.0, 'mean': 4615.205335560748, 'std': 1208.3699582560432, 'min': 86.6667, '25%': 4590.77, '50%': 4603.08, '75%': 4617.44, 'max': 152308.0}, 'AF4': {'count': 14980.0, 'mean': 4416.435832443258, 'std': 5891.285042523523, 'min': 1366.15, '25%': 4342.05, '50%': 4354.87, '75%': 4372.82, 'max': 715897.0}, 'eyeDetection': {'count': 14980.0, 'mean': 0.4487983978638184, 'std': 0.4973880888730353, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 1.0}}
<dataframe_info>
RangeIndex: 14980 entries, 0 to 14979
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 AF3 14980 non-null float64
1 F7 14980 non-null float64
2 F3 14980 non-null float64
3 FC5 14980 non-null float64
4 T7 14980 non-null float64
5 P7 14980 non-null float64
6 O1 14980 non-null float64
7 O2 14980 non-null float64
8 P8 14980 non-null float64
9 T8 14980 non-null float64
10 FC6 14980 non-null float64
11 F4 14980 non-null float64
12 F8 14980 non-null float64
13 AF4 14980 non-null float64
14 eyeDetection 14980 non-null int64
dtypes: float64(14), int64(1)
memory usage: 1.7 MB
<some_examples>
{'AF3': {'0': 4329.23, '1': 4324.62, '2': 4327.69, '3': 4328.72}, 'F7': {'0': 4009.23, '1': 4004.62, '2': 4006.67, '3': 4011.79}, 'F3': {'0': 4289.23, '1': 4293.85, '2': 4295.38, '3': 4296.41}, 'FC5': {'0': 4148.21, '1': 4148.72, '2': 4156.41, '3': 4155.9}, 'T7': {'0': 4350.26, '1': 4342.05, '2': 4336.92, '3': 4343.59}, 'P7': {'0': 4586.15, '1': 4586.67, '2': 4583.59, '3': 4582.56}, 'O1': {'0': 4096.92, '1': 4097.44, '2': 4096.92, '3': 4097.44}, 'O2': {'0': 4641.03, '1': 4638.97, '2': 4630.26, '3': 4630.77}, 'P8': {'0': 4222.05, '1': 4210.77, '2': 4207.69, '3': 4217.44}, 'T8': {'0': 4238.46, '1': 4226.67, '2': 4222.05, '3': 4235.38}, 'FC6': {'0': 4211.28, '1': 4207.69, '2': 4206.67, '3': 4210.77}, 'F4': {'0': 4280.51, '1': 4279.49, '2': 4282.05, '3': 4287.69}, 'F8': {'0': 4635.9, '1': 4632.82, '2': 4628.72, '3': 4632.31}, 'AF4': {'0': 4393.85, '1': 4384.1, '2': 4389.23, '3': 4396.41}, 'eyeDetection': {'0': 0, '1': 0, '2': 0, '3': 0}}
<end_description>
| 931 | 0 | 2,355 | 931 |
129453674
|
<jupyter_start><jupyter_text>California Housing
Kaggle dataset identifier: california-housing
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
housing = pd.read_csv("/kaggle/input/california-housing/california housing.csv")
housing
housing.tail(5)
housing["ocean_proximity"].value_counts()
housing_shuffled = housing.sample(n=len(housing), random_state=1)
housing_shuffled
pd.get_dummies(housing_shuffled["ocean_proximity"])
housing_shuffled.drop("ocean_proximity", axis=1)
housing_pd = pd.concat(
[
housing_shuffled.drop("ocean_proximity", axis=1),
pd.get_dummies(housing_shuffled["ocean_proximity"]),
],
axis=1,
)
housing_pd
housing_pd_final = housing_pd[
[
"longitude",
"latitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income",
"<1H OCEAN",
"INLAND",
"ISLAND",
"NEAR BAY",
"NEAR OCEAN",
"median_house_value",
]
]
housing_pd_final.head(5)
housing_pd_final = housing_pd_final.dropna()
len(housing_pd_final)
# sequential train/test/validation split (slices made non-overlapping)
train_pd, test_pd, val_pd = (
    housing_pd_final[:18000],
    housing_pd_final[18000:19217],
    housing_pd_final[19217:],
)
len(train_pd), len(test_pd), len(val_pd)
X_train, y_train = train_pd.to_numpy()[:, :-1], train_pd.to_numpy()[:, -1]
X_train
X_train.shape
X_val, y_val = val_pd.to_numpy()[:, :-1], val_pd.to_numpy()[:, -1]
X_test, y_test = test_pd.to_numpy()[:, :-1], test_pd.to_numpy()[:, -1]
X_train.shape, X_test.shape, X_val.shape, y_val.shape, X_test.shape, y_test.shape
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # fit the scaler on the training split only
X_test = scaler.transform(X_test)  # reuse the training statistics so test/val are scaled consistently (no leakage)
X_val = scaler.transform(X_val)
X_train.shape, X_test.shape, X_val.shape
from sklearn.metrics import mean_squared_error as mse
from sklearn.linear_model import LinearRegression
lm = LinearRegression().fit(X_train, y_train)
mse(lm.predict(X_train), y_train, squared=False), mse(
lm.predict(X_val), y_val, squared=False
)
from sklearn.neighbors import KNeighborsRegressor
knn = KNeighborsRegressor(n_neighbors=11).fit(X_train, y_train)
mse(knn.predict(X_train), y_train, squared=False), mse(
knn.predict(X_val), y_val, squared=False
)
from sklearn.ensemble import RandomForestRegressor
rf_clf = RandomForestRegressor(max_depth=9).fit(X_train, y_train)
mse(rf_clf.predict(X_train), y_train, squared=False), mse(
rf_clf.predict(X_val), y_val, squared=False
)
from sklearn.ensemble import GradientBoostingRegressor
gbc = GradientBoostingRegressor(n_estimators=500, max_depth=2).fit(X_train, y_train)
mse(gbc.predict(X_train), y_train, squared=False), mse(
gbc.predict(X_val), y_val, squared=False
)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import *
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.metrics import RootMeanSquaredError
from tensorflow.keras.optimizers import Adam
simple_nn = Sequential()
simple_nn.add(InputLayer((13,)))
simple_nn.add(Dense(2, "relu"))
simple_nn.add(Dense(1, "linear"))
opt = Adam(learning_rate=0.1)
cp = ModelCheckpoint("models/simple_nn", save_best_only=True)
simple_nn.compile(optimizer=opt, loss="mse", metrics=[RootMeanSquaredError()])
simple_nn.fit(
x=X_train, y=y_train, validation_data=(X_val, y_val), callbacks=[cp], epochs=10
)
from tensorflow.keras.models import load_model
simple_nn = load_model("models/simple_nn/")
mse(simple_nn.predict(X_train), y_train, squared=False), mse(
simple_nn.predict(X_val), y_val, squared=False
)
large_nn = Sequential()
large_nn.add(InputLayer((13,)))
large_nn.add(Dense(256, "relu"))
large_nn.add(Dense(128, "relu"))
large_nn.add(Dense(64, "relu"))
large_nn.add(Dense(32, "relu"))
large_nn.add(Dense(1, "linear"))
opt = Adam(learning_rate=0.1)
cp = ModelCheckpoint("models/large_nn", save_best_only=True)  # separate path so the simple model's checkpoint is not overwritten
large_nn.compile(optimizer=opt, loss="mse", metrics=[RootMeanSquaredError()])
large_nn.fit(
    x=X_train, y=y_train, validation_data=(X_val, y_val), callbacks=[cp], epochs=20
)
mse(large_nn.predict(X_train), y_train, squared=False), mse(
large_nn.predict(X_val), y_val, squared=False
)
mse(gbc.predict(X_test), y_test, squared=False)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/453/129453674.ipynb
|
california-housing
|
amritraj1296
|
[{"Id": 129453674, "ScriptId": 38491681, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14659989, "CreationDate": "05/14/2023 00:59:38", "VersionNumber": 1.0, "Title": "notebook775f63ef11", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 130.0, "LinesInsertedFromPrevious": 130.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185519610, "KernelVersionId": 129453674, "SourceDatasetVersionId": 5679363}]
|
[{"Id": 5679363, "DatasetId": 3264931, "DatasourceVersionId": 5754914, "CreatorUserId": 14659989, "LicenseName": "Unknown", "CreationDate": "05/14/2023 00:51:02", "VersionNumber": 1.0, "Title": "California Housing", "Slug": "california-housing", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3264931, "CreatorUserId": 14659989, "OwnerUserId": 14659989.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5679363.0, "CurrentDatasourceVersionId": 5754914.0, "ForumId": 3330536, "Type": 2, "CreationDate": "05/14/2023 00:51:02", "LastActivityDate": "05/14/2023", "TotalViews": 122, "TotalDownloads": 2, "TotalVotes": 0, "TotalKernels": 1}]
|
[{"Id": 14659989, "UserName": "amritraj1296", "DisplayName": "Amritraj1296", "RegisterDate": "04/16/2023", "PerformanceTier": 0}]
|
| false | 1 | 1,654 | 0 | 1,675 | 1,654 |
||
129453736
|
import numpy as np
import pandas as pd
import zipfile
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_log_error
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso, Ridge, ElasticNet
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
#
# This Python snippet imports the libraries needed for data analysis and machine learning, brings in several model evaluation metrics, and walks a specific directory (here '/kaggle/input') with a 'for' loop to list its files. The imported libraries include numpy, pandas, sklearn and the linear regression models Lasso, Ridge and ElasticNet.
z = zipfile.ZipFile("/kaggle/input/sberbank-russian-housing-market/train.csv.zip")
z.extractall()
t = zipfile.ZipFile("/kaggle/input/sberbank-russian-housing-market/test.csv.zip")
t.extractall()
# Unzip the files
df_train = pd.read_csv("/kaggle/working/train.csv")
df_test = pd.read_csv("/kaggle/working/test.csv")
# Load the files
num_linhas_train = len(df_train)
print("Number of rows in 'train.csv' =", num_linhas_train)
num_linhas_test = len(df_test)
print("\nNumber of rows in 'test.csv' =", num_linhas_test)
dif_linhas = num_linhas_train - num_linhas_test
print("\nDifference in number of rows =", dif_linhas)
# This code counts the rows of the two files. It stores the row count of 'train.csv' in 'num_linhas_train' using 'len()' and prints it, does the same for 'test.csv' in 'num_linhas_test', and finally computes and prints the difference between the two counts in 'dif_linhas'.
df_train.head()
# This Python code displays the first rows of the 'df_train' dataset as a table. The 'head()' method returns the first 5 rows of the dataset by default.
df_test.head()
#
# This Python code displays the first rows of the 'df_test' dataset as a table. The 'head()' method returns the first 5 rows of the dataset by default.
for f in df_train.columns:
if df_train[f].dtype == "object":
lbl = LabelEncoder()
lbl.fit(list(df_train[f].values))
df_train[f] = lbl.transform(list(df_train[f].values))
df_train.head()
for col in df_train.columns:
if df_train[col].isnull().sum() > 0:
mean = df_train[col].mean()
df_train[col] = df_train[col].fillna(mean)
# The cells above label-encode the categorical columns of the training set, turning category values into integer codes, and fill missing values with each column's mean.
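# A tiny illustration (a sketch, not from the original notebook) of what LabelEncoder does
# to one categorical column: each distinct string is mapped to an integer code. The strings
# below are only illustrative values.
_example_encoder = LabelEncoder().fit(["Investment", "OwnerOccupier", "Investment"])
print(
    _example_encoder.classes_.tolist(),
    "->",
    _example_encoder.transform(_example_encoder.classes_).tolist(),
)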
df_train.head()
X = df_train[
["full_sq", "life_sq", "floor", "max_floor", "material", "build_year", "num_room"]
]
y = np.log(df_train.price_doc)
# Select the feature matrix X and the log-transformed target y in a form the models can use
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42
)
# Create training and test sets (70/30 split)
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Apply standardization ('StandardScaler') to the input data 'X_train' and 'X_test'. The 'fit' method learns the mean and standard deviation from the training set and 'transform' applies that same scaling to both the training and test sets. This helps ensure the inputs are on the same scale before they are used to train the model.
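# A quick check (a sketch, not part of the original notebook): after standardization the
# training features should have column-wise mean close to 0 and standard deviation close to 1.
print(X_train.mean(axis=0).round(3))
print(X_train.std(axis=0).round(3))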
modelo = Lasso()
modelo.fit(X_train, y_train)
# Defines a linear regression model with L1 regularization (Lasso) and fits it to the training sets 'X_train' and 'y_train' with 'fit'. The L1 penalty helps drop less important features from the dataset, which helps prevent the model from overfitting.
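# One way to see the L1 effect described above (illustrative sketch, not in the original
# notebook): coefficients that Lasso shrank exactly to zero correspond to features it dropped.
print("Lasso coefficients:", modelo.coef_)
print("features dropped by the L1 penalty:", int((modelo.coef_ == 0).sum()))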
modelo = LinearRegression()
modelo.fit(X_train, y_train)
# Ordinary linear regression, used to predict continuous values of a target variable from a set of predictors
modelo = ElasticNet()
modelo.fit(X_train, y_train)
# ElasticNet combines L1 and L2 penalties; this regularization is useful to avoid overfitting on datasets with many predictor variables.
modelo = Ridge()
modelo.fit(X_train, y_train)
# Ridge applies an L2 penalty that shrinks all coefficients, which lets the model cope with datasets that have many features and few observations. Note that each assignment above overwrites 'modelo', so only this last fitted model (Ridge) is evaluated below.
modelo.coef_, modelo.intercept_
# These attributes return the coefficients and the intercept of the fitted linear regression.
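# Pairing each coefficient with its feature name makes the output easier to read
# (a small sketch, not in the original notebook):
print(dict(zip(X.columns, modelo.coef_.round(4))))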
y_pred = modelo.predict(X_train)
mae = mean_absolute_error(y_train, y_pred)
mape = mean_absolute_percentage_error(y_train, y_pred)
rmse = mean_squared_error(y_train, y_pred) ** 0.5
r2 = r2_score(y_train, y_pred)
rmsle = np.sqrt(mean_squared_log_error(y_train, y_pred))
print("MAE:", mae)
print("MAPE:", mape)
print("RMSE:", rmse)
print("R2:", r2)
print("RMSLE:", rmsle)
print("")
# Evaluate the regression model on the training split by comparing the fitted model's predictions with the true values
y_pred = modelo.predict(X_test)
mae = mean_absolute_error(y_test, y_pred)
mape = mean_absolute_percentage_error(y_test, y_pred)
rmse = mean_squared_error(y_test, y_pred) ** 0.5
r2 = r2_score(y_test, y_pred)
rmsle = np.sqrt(mean_squared_log_error(y_test, y_pred))
print("MAE:", mae)
print("MAPE:", mape)
print("RMSE:", rmse)
print("R2:", r2)
print("RMSLE:", rmsle)
# Evaluate the regression model on the held-out test split in the same way
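# The same block of metrics is computed twice above; a small helper like the sketch below
# (not part of the original notebook) keeps both evaluations consistent and avoids the copy-paste:
def report(split_name, y_true, y_hat):
    print(
        split_name,
        "| MAE:", mean_absolute_error(y_true, y_hat),
        "| RMSE:", mean_squared_error(y_true, y_hat) ** 0.5,
        "| R2:", r2_score(y_true, y_hat),
    )
# usage: report("train", y_train, modelo.predict(X_train)); report("test", y_test, modelo.predict(X_test))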
for f in df_test.columns:
if df_test[f].dtype == "object":
lbl = LabelEncoder()
lbl.fit(list(df_test[f].values))
df_test[f] = lbl.transform(list(df_test[f].values))
# Repeat the label encoding performed on the training data for the test data
for col in df_test.columns:
if df_test[col].isnull().sum() > 0:
mean = df_test[col].mean()
df_test[col] = df_test[col].fillna(mean)
# Repeat the training-set imputation on the test data
X_test = df_test[
    ["full_sq", "life_sq", "floor", "max_floor", "material", "build_year", "num_room"]
]
X_test = scaler.transform(X_test)  # the competition features must pass through the same scaler fitted on the training data
y_pred = modelo.predict(X_test)
# The model is now making predictions on data it never saw during training, which is an important way to assess how well it generalizes to new data.
y_pred = np.exp(y_pred)
# Undo the log transform applied to the target, bringing the predictions back to the original price scale
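# A tiny worked example of the round trip (illustrative, not from the original notebook):
# the model was trained on log(price_doc), so exponentiating recovers the original scale,
# e.g. np.log(1000000) is about 13.8155 and np.exp of that returns roughly 1000000 again.
print(np.exp(np.log(1000000.0)))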
output = pd.DataFrame({"id": df_test.id, "price_doc": y_pred})
output.to_csv("submission.csv", index=False)
print("Your submission was successfully saved!")
output.head()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/453/129453736.ipynb
| null | null |
[{"Id": 129453736, "ScriptId": 38491194, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14641641, "CreationDate": "05/14/2023 01:01:07", "VersionNumber": 1.0, "Title": "AC2-Regre\u00e7\u00e3o", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 206.0, "LinesInsertedFromPrevious": 70.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 136.0, "LinesInsertedFromFork": 70.0, "LinesDeletedFromFork": 53.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 136.0, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 2,328 | 0 | 2,328 | 2,328 |
||
129453974
|
# Things to use
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
# ## Compare Log Loss with Weighted Log Loss
# The average log-loss
def log_loss(y_true, y_pred):
# y_true: correct labels 0, 1
# y_pred: predicted probabilities of class=1
# calculate the number of observations for each class
N_0 = np.sum(1 - y_true)
N_1 = np.sum(y_true)
# calculate the predicted probabilities for each class
p_1 = np.clip(y_pred, 1e-15, 1 - 1e-15)
p_0 = 1 - p_1
# calculate the summed log loss for each class
log_loss_0 = -np.sum((1 - y_true) * np.log(p_0))
log_loss_1 = -np.sum(y_true * np.log(p_1))
# calculate the summed logarithmic loss
log_loss = log_loss_0 + log_loss_1
# return the average log loss
return log_loss / (N_0 + N_1)
# The weighted average log-loss
# slightly modified from
# https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/409691
def balanced_log_loss(y_true, y_pred):
# y_true: correct labels 0, 1
# y_pred: predicted probabilities of class=1
# calculate the number of observations for each class
N_0 = np.sum(1 - y_true)
N_1 = np.sum(y_true)
# calculate the weights for each class to balance classes
w_0 = 1 / N_0
w_1 = 1 / N_1
# calculate the predicted probabilities for each class
p_1 = np.clip(y_pred, 1e-15, 1 - 1e-15)
p_0 = 1 - p_1
# calculate the summed log loss for each class
log_loss_0 = -np.sum((1 - y_true) * np.log(p_0))
log_loss_1 = -np.sum(y_true * np.log(p_1))
# calculate the weighted summed logarithmic loss
    # (factor of 2 included to give the same result as LL with balanced input)
balanced_log_loss = 2 * (w_0 * log_loss_0 + w_1 * log_loss_1) / (w_0 + w_1)
# return the average log loss
return balanced_log_loss / (N_0 + N_1)
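# Quick sanity check (a sketch, not part of the original notebook): for the plain average
# log-loss above, scikit-learn's implementation should agree, since both compute the mean
# negative log-likelihood with probabilities clipped away from 0 and 1.
from sklearn.metrics import log_loss as sk_log_loss
_y_chk = np.array([0, 0, 1, 1])
_p_chk = np.array([0.1, 0.4, 0.35, 0.8])
print(log_loss(_y_chk, _p_chk), sk_log_loss(_y_chk, _p_chk))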
# Examples
# Input balanced classes
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) * 0.5
print(log_loss(y_true, y_pred), balanced_log_loss(y_true, y_pred))
# 0.6931471805599453 0.6931471805599453
# Input unbalanced classes
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) * 0.5
print(log_loss(y_true, y_pred), balanced_log_loss(y_true, y_pred))
# 0.6931471805599453 0.5198603854199589
# ### Balanced samples
# LL is optimum when p_y1 = N1/N = 0.5 here
# Balanced LL is also optimum when p_y1 = 0.5
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
py1 = np.zeros(99)
LL = np.zeros(99)
LLbal = np.zeros(99)
for ip in range(1, 100):
py1[ip - 1] = ip / 100
y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) * ip / 100
LL[ip - 1] = log_loss(y_true, y_pred)
LLbal[ip - 1] = balanced_log_loss(y_true, y_pred)
plt.plot(py1, LL, "b")
plt.ylim(0, 2.0)
plt.plot(py1, LLbal, "r--")
plt.xlabel("Predicted Probability")
plt.ylabel("Average Log Loss")
plt.title("Comparing LogLoss (blue) with Weighted LogLoss (red)")
# ### Unbalanced samples
# LL is optimum when p_y1 = N1/N = 0.25 here
# Balanced LL is optimum when p_y1 = 0.5
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
py1 = np.zeros(99)
LL = np.zeros(99)
LLbal = np.zeros(99)
for ip in range(1, 100):
py1[ip - 1] = ip / 100
y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) * ip / 100
LL[ip - 1] = log_loss(y_true, y_pred)
LLbal[ip - 1] = balanced_log_loss(y_true, y_pred)
plt.plot(py1, LL, "b")
plt.ylim(0, 2.0)
plt.plot(py1, LLbal, "r--")
plt.xlabel("Predicted Probability")
plt.ylabel("Average Log Loss")
plt.title("Comparing LogLoss (blue) with Weighted LogLoss (red)")
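# A small numerical check of the optima claimed above (a sketch, not in the original
# notebook): with the unbalanced labels, the plain log loss is minimized near the class
# frequency N1/N = 0.25, while the balanced log loss is minimized near 0.5.
print("LL minimized at p =", py1[np.argmin(LL)])
print("Balanced LL minimized at p =", py1[np.argmin(LLbal)])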
# ## Look at the data files...
# Available files
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# Getting the data
dat_dir = "../input/icr-identify-age-related-conditions/"
# CSV files
training_file = "train.csv"
greeks_file = "greeks.csv"
test_file = "test.csv"
# Read the training data
df_training = pd.read_csv(dat_dir + training_file)
df_training
# Probabilities of the classes using all training data
prob_train_1 = sum(df_training.Class) / len(df_training)
prob_train_0 = 1 - prob_train_1
print(prob_train_0, prob_train_1)
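# As a rough baseline check (a sketch, not in the original notebook): score the constant
# class-prior prediction against the training labels with the two metrics defined earlier.
_y_all = df_training.Class.values
_p_all = np.full(len(df_training), prob_train_1)
print("train log loss:", log_loss(_y_all, _p_all))
print("train balanced log loss:", balanced_log_loss(_y_all, _p_all))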
# Read the test file
df_test = pd.read_csv(dat_dir + test_file)
# Make predictions
df_predict = pd.DataFrame(data=df_test.Id, columns=["Id"])
df_predict["class_0"] = prob_train_0
df_predict["class_1"] = prob_train_1
df_predict
# Save the result as the submission
df_predict.to_csv("submission.csv", index=False, float_format="%.6f")
# Check the file
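# A minimal way to actually inspect the saved file (an assumption about what the check
# should look like, not code from the original notebook):
pd.read_csv("submission.csv").head()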
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/453/129453974.ipynb
| null | null |
[{"Id": 129453974, "ScriptId": 38455086, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1496178, "CreationDate": "05/14/2023 01:05:45", "VersionNumber": 2.0, "Title": "ICR 2023 - Balanced Log Loss", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 148.0, "LinesInsertedFromPrevious": 108.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 40.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
| false | 0 | 1,978 | 0 | 1,978 | 1,978 |
||
129453559
|
<jupyter_start><jupyter_text>Personal Key Indicators of Heart Disease
# Key Indicators of Heart Disease
## 2020 annual CDC survey data of 400k adults related to their health status
### What topic does the dataset cover?
According to the [CDC](https://www.cdc.gov/heartdisease/risk_factors.htm), heart disease is one of the leading causes of death for people of most races in the US (African Americans, American Indians and Alaska Natives, and white people). About half of all Americans (47%) have at least 1 of 3 key risk factors for heart disease: high blood pressure, high cholesterol, and smoking. Other key indicator include diabetic status, obesity (high BMI), not getting enough physical activity or drinking too much alcohol. Detecting and preventing the factors that have the greatest impact on heart disease is very important in healthcare. Computational developments, in turn, allow the application of machine learning methods to detect "patterns" from the data that can predict a patient's condition.
### Where did the dataset come from and what treatments did it undergo?
Originally, the dataset come from the CDC and is a major part of the Behavioral Risk Factor Surveillance System (BRFSS), which conducts annual telephone surveys to gather data on the health status of U.S. residents. As the [CDC](https://www.cdc.gov/heartdisease/risk_factors.htm) describes: "Established in 1984 with 15 states, BRFSS now collects data in all 50 states as well as the District of Columbia and three U.S. territories. BRFSS completes more than 400,000 adult interviews each year, making it the largest continuously conducted health survey system in the world.". The most recent dataset (as of February 15, 2022) includes data from 2020. It consists of 401,958 rows and 279 columns. The vast majority of columns are questions asked to respondents about their health status, such as "Do you have serious difficulty walking or climbing stairs?" or "Have you smoked at least 100 cigarettes in your entire life? [Note: 5 packs = 100 cigarettes]". In this dataset, I noticed many different factors (questions) that directly or indirectly influence heart disease, so I decided to select the most relevant variables from it and do some cleaning so that it would be usable for machine learning projects.
### What can you do with this dataset?
As described above, the original dataset of nearly 300 variables was reduced to just about 20 variables. In addition to classical EDA, this dataset can be used to apply a range of machine learning methods, most notably classifier models (logistic regression, SVM, random forest, etc.). You should treat the variable "HeartDisease" as a binary ("Yes" - respondent had heart disease; "No" - respondent had no heart disease). But note that classes are not balanced, so the classic model application approach is not advisable. Fixing the weights/undersampling should yield significantly betters results. Based on the dataset, I constructed a logistic regression model and embedded it in an application you might be inspired by: https://share.streamlit.io/kamilpytlak/heart-condition-checker/main/app.py. Can you indicate which variables have a significant effect on the likelihood of heart disease?
Kaggle dataset identifier: personal-key-indicators-of-heart-disease
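A minimal sketch of the class-weighting idea mentioned above (my own illustration, not part of the dataset description; the file path and column names follow the snippet below, and the column subset is chosen only for brevity): a logistic regression with class_weight="balanced" so the rare positive class is not ignored.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
df = pd.read_csv("personal-key-indicators-of-heart-disease/heart_2020_cleaned.csv")
X = df[["BMI", "PhysicalHealth", "MentalHealth", "SleepTime"]]  # numeric columns only, for brevity
y = (df["HeartDisease"] == "Yes").astype(int)  # binary target: 1 = heart disease reported
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # plain accuracy is optimistic here; recall or ROC-AUC is more informative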
<jupyter_code>import pandas as pd
df = pd.read_csv('personal-key-indicators-of-heart-disease/heart_2020_cleaned.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 319795 entries, 0 to 319794
Data columns (total 18 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 HeartDisease 319795 non-null object
1 BMI 319795 non-null float64
2 Smoking 319795 non-null object
3 AlcoholDrinking 319795 non-null object
4 Stroke 319795 non-null object
5 PhysicalHealth 319795 non-null float64
6 MentalHealth 319795 non-null float64
7 DiffWalking 319795 non-null object
8 Sex 319795 non-null object
9 AgeCategory 319795 non-null object
10 Race 319795 non-null object
11 Diabetic 319795 non-null object
12 PhysicalActivity 319795 non-null object
13 GenHealth 319795 non-null object
14 SleepTime 319795 non-null float64
15 Asthma 319795 non-null object
16 KidneyDisease 319795 non-null object
17 SkinCancer 319795 non-null object
dtypes: float64(4), object(14)
memory usage: 43.9+ MB
<jupyter_text>Examples:
{
"HeartDisease": "No",
"BMI": 16.6,
"Smoking": "Yes",
"AlcoholDrinking": "No",
"Stroke": "No",
"PhysicalHealth": 3,
"MentalHealth": 30,
"DiffWalking": "No",
"Sex": "Female",
"AgeCategory": "55-59",
"Race": "White",
"Diabetic": "Yes",
"PhysicalActivity": "Yes",
"GenHealth": "Very good",
"SleepTime": 5,
"Asthma": "Yes",
"KidneyDisease": "No",
"SkinCancer": "Yes"
}
{
"HeartDisease": "No",
"BMI": 20.34,
"Smoking": "No",
"AlcoholDrinking": "No",
"Stroke": "Yes",
"PhysicalHealth": 0,
"MentalHealth": 0,
"DiffWalking": "No",
"Sex": "Female",
"AgeCategory": "80 or older",
"Race": "White",
"Diabetic": "No",
"PhysicalActivity": "Yes",
"GenHealth": "Very good",
"SleepTime": 7,
"Asthma": "No",
"KidneyDisease": "No",
"SkinCancer": "No"
}
{
"HeartDisease": "No",
"BMI": 26.58,
"Smoking": "Yes",
"AlcoholDrinking": "No",
"Stroke": "No",
"PhysicalHealth": 20,
"MentalHealth": 30,
"DiffWalking": "No",
"Sex": "Male",
"AgeCategory": "65-69",
"Race": "White",
"Diabetic": "Yes",
"PhysicalActivity": "Yes",
"GenHealth": "Fair",
"SleepTime": 8,
"Asthma": "Yes",
"KidneyDisease": "No",
"SkinCancer": "No"
}
{
"HeartDisease": "No",
"BMI": 24.21,
"Smoking": "No",
"AlcoholDrinking": "No",
"Stroke": "No",
"PhysicalHealth": 0,
"MentalHealth": 0,
"DiffWalking": "No",
"Sex": "Female",
"AgeCategory": "75-79",
"Race": "White",
"Diabetic": "No",
"PhysicalActivity": "No",
"GenHealth": "Good",
"SleepTime": 6,
"Asthma": "No",
"KidneyDisease": "No",
"SkinCancer": "Yes"
}
<jupyter_script>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# basic libraries
import numpy as np
import pandas as pd
# visulaization modules
import matplotlib.pyplot as plt
import seaborn as sns
# import plotly.express as px
# import plotly.graph_objs as go
np.random.seed(42)
pd.set_option("display.max_colwidth", None)  # None replaces the deprecated -1 for showing full column contents
palette = sns.color_palette(["#2B2E4A", "#E84545", "#E4EA8C", "#3F6C45"], as_cmap=True)
df = pd.read_csv(
"/kaggle/input/personal-key-indicators-of-heart-disease/heart_2020_cleaned.csv"
)
df.head()
column_descriptions = pd.DataFrame(
[
[
"Respondents that have ever reported having coronary heart disease (CHD) or myocardial infarction (MI)",
"Body Mass Index (BMI)",
"Have you smoked at least 100 cigarettes in your entire life? [Note: 5 packs = 100 cigarettes]",
"Heavy drinkers (adult men having more than 14 drinks per week and adult women having more than 7 drinks per week",
"(Ever told) (you had) a stroke?",
"Now thinking about your physical health, which includes physical illness and injury, for how many days during the past 30 days was your physical health not good? (0-30 days)",
"Thinking about your mental health, for how many days during the past 30 days was your mental health not good? (0-30 days)",
"Do you have serious difficulty walking or climbing stairs?",
"Are you male or female?",
"Fourteen-level age category",
"Imputed race/ethnicity value",
"(Ever told) (you had) diabetes?",
"Adults who reported doing physical activity or exercise during the past 30 days other than their regular job",
"Would you say that in general your health is...",
"On average, how many hours of sleep do you get in a 24-hour period?",
"(Ever told) (you had) asthma?",
"Not including kidney stones, bladder infection or incontinence, were you ever told you had kidney disease?",
"(Ever told) (you had) skin cancer?",
]
],
columns=df.columns,
)
df.info()
df.describe()
df.describe().columns
df.head()
# # Label Distribution
temp = pd.DataFrame(df.groupby("Stroke")["Stroke"].count())
ax = sns.barplot(
y=temp.index,
x=temp.columns[0],
data=temp,
palette=palette,
)
plt.text(
310000,
0,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
plt.text(
310000,
0.1,
f"{round(temp.iloc[0].values[0]/len(df), 2)*100}% ({temp.iloc[0].values[0]})",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
plt.text(
15000,
1,
"Stroke",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#E84545",
},
)
plt.text(
15000,
1.1,
f"{round(temp.iloc[1].values[0]/len(df), 2)*100}% ({temp.iloc[1].values[0]})",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#E84545",
},
)
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(True)
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
ax.set_title(
"Label Distribution",
{"font": "Serif", "weight": "bold", "color": "black", "size": 18},
)
plt.show()
# # Numerical Data
# ## Numerical Data Description
numerical_cols = df.describe().columns
column_descriptions[numerical_cols].T
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[1:4, 0:8]) # distribution plot
ax3 = fig.add_subplot(gs[6:9, 0:8]) # hue distribution plot
ax1 = fig.add_subplot(gs[2:9, 13:]) # dumbbell plot
# axes list
axes = [ax1, ax2, ax3]
# setting of axes; visibility of axes and spines turn off
for ax in axes:
ax.axes.get_yaxis().set_visible(False)
ax.set_facecolor("#f6f5f5")
for loc in ["left", "right", "top", "bottom"]:
ax.spines[loc].set_visible(False)
fig.patch.set_facecolor("#f6f5f5")
ax1.axes.get_xaxis().set_visible(False)
ax1.axes.get_yaxis().set_visible(True)
ax1.set_xlim(xmin=-250, xmax=2000)
ax1.set_ylim(ymin=-1, ymax=3.5)
# dumbbell plot of stroke vs healthy people by BMI category
def bmi_label(bmi):
if bmi < 18.5:
return "Underweight"
elif bmi < 25:
return "Healthy Weight"
elif bmi < 30:
return "Overweight"
else:
return "Obese"
pos_df = (
df[df["Stroke"] == "Yes"]
.BMI.map(bmi_label)
.value_counts()
.reindex(index=["Obese", "Overweight", "Healthy Weight", "Underweight"])
)
neg_df = (
df[df["Stroke"] == "No"]
.BMI.map(bmi_label)
.value_counts()
.reindex(index=["Obese", "Overweight", "Healthy Weight", "Underweight"])
)
if pos_df.values.max() > neg_df.values.max():
c = 1900 / pos_df.values.max()
else:
c = 1900 / neg_df.values.max()
xmin = []
xmax = []
for i in ["Obese", "Overweight", "Healthy Weight", "Underweight"]:
    if pos_df[i] > neg_df[i]:
        xmax.append(pos_df[i] * c)
        xmin.append(neg_df[i] * c)
    else:
        xmax.append(neg_df[i] * c)
        xmin.append(pos_df[i] * c)
ax1.hlines(
y=["Obese", "Overweight", "Healthy Weight", "Underweight"],
xmin=xmin,
xmax=xmax,
color="grey",
**{"linewidth": 0.5},
)
sns.scatterplot(
y=pos_df.index,
x=pos_df.values / 60,
s=pos_df.values * 0.03,
color="#E84545",
ax=ax1,
alpha=1,
)
sns.scatterplot(
y=neg_df.index,
x=neg_df.values / 60,
s=neg_df.values * 0.03,
color="#2B2E4A",
ax=ax1,
alpha=1,
)
ax1.set_yticklabels(
labels=[
"Obese\n(BMI > 30)",
"Overweight\n(BMI < 30)",
"Healthy Weight\n(BMI < 25)",
"Underweight\n(BMI < 18.5)",
],
fontdict={
"font": "Serif",
"color": "#50312F",
"fontsize": 16,
"fontweight": "bold",
"color": "black",
},
)
ax1.text(
-750,
-1.5,
"BMI Impact on Strokes?",
{"font": "Serif", "size": "25", "weight": "bold", "color": "black"},
)
ax1.text(
1030,
-0.4,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax1.text(1300, -0.4, "|", {"color": "black", "size": "16", "weight": "bold"})
ax1.text(
1340,
-0.4,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
ax1.text(
-750,
-0.8,
"There are more strokes for people who are obese or overweight;\nhowever, people who are at a healthy weight are not safe.",
{"font": "Serif", "size": "16", "color": "black"},
)
ax1.text(
pos_df.values[0] * c - 230,
0.05,
pos_df.values[0],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[0] * c + 140,
0.05,
neg_df.values[0],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
ax1.text(
pos_df.values[1] * c - 230,
1.05,
pos_df.values[1],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[1] * c + 140,
1.05,
neg_df.values[1],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
ax1.text(
pos_df.values[2] * c - 230,
2.05,
pos_df.values[2],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[2] * c + 140,
2.05,
neg_df.values[2],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
ax1.text(
pos_df.values[3] * c - 230,
3.05,
pos_df.values[3],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[3] * c + 140,
3.05,
neg_df.values[3],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
# distribution plots ---- only single variable
sns.kdeplot(
data=df,
x="BMI",
ax=ax2,
shade=True,
color="#2B2E4A",
alpha=1,
)
ax2.set_xlabel(
"Body mass index of a person",
fontdict={"font": "Serif", "color": "black", "size": 16, "weight": "bold"},
)
ax2.text(
-17,
0.095,
"Overall BMI Distribution",
{"font": "Serif", "color": "black", "weight": "bold", "size": 24},
)
ax2.text(
-17,
0.085,
"BMI is highly right-skewed, and average BMI is around 30.",
{"font": "Serif", "size": "16", "color": "black"},
)
ax2.text(
63,
0.06,
"Total",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
ax2.text(
71.5, 0.06, "=", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
74.5,
0.06,
"Strokes",
{"font": "Serif", "size": "14", "color": "#E84545", "weight": "bold"},
)
ax2.text(
88.7, 0.06, "+", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
92.5,
0.06,
"Healthy",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
# distribution plots with hue of strokes
sns.kdeplot(
data=df[df["Stroke"] == "Yes"],
x="BMI",
ax=ax3,
shade=True,
alpha=1,
color="#E84545",
)
sns.kdeplot(
data=df[df["Stroke"] == "No"],
x="BMI",
ax=ax3,
shade=True,
alpha=0.8,
color="#2B2E4A",
)
ax3.set_xlabel(
"Body mass index of a person",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax3.text(
-15,
0.095,
"BMI-Strokes Distribution",
{"font": "Serif", "weight": "bold", "color": "black", "size": 24},
)
ax3.text(
-15,
0.085,
"There is a peak in strokes at around 30 BMI",
{"font": "Serif", "color": "black", "size": 16},
)
ax3.text(
66,
0.06,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax3.text(82, 0.06, "|", {"color": "black", "size": "16", "weight": "bold"})
ax3.text(
84,
0.06,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
fig.text(
0.35,
0.925,
"Strokes and BMI",
{"font": "Serif", "weight": "bold", "color": "black", "size": 35},
)
fig.show()
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[1:4, 0:8]) # distribution plot
ax3 = fig.add_subplot(gs[6:9, 0:8]) # hue distribution plot
ax1 = fig.add_subplot(gs[2:9, 13:]) # dumbbell plot
# axes list
axes = [ax1, ax2, ax3]
# setting of axes; visibility of axes and spines turn off
for ax in axes:
# ax.axes.get_yaxis().set_visible(False)
ax.set_facecolor("#f6f5f5")
for loc in ["left", "right", "top", "bottom"]:
ax.spines[loc].set_visible(False)
fig.patch.set_facecolor("#f6f5f5")
# ax1.axes.get_xaxis().set_visible(False)
ax1.axes.get_yaxis().set_visible(False)
# box plot of physical health for stroke vs healthy people
sns.boxplot(y=df["Stroke"], x=df["PhysicalHealth"], ax=ax1, palette=palette)
ax1.set_xlabel(
"Days being ill or injured",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax1.text(
-5,
-0.75,
"Physical Health Impact on Strokes?",
{"font": "Serif", "size": "25", "weight": "bold", "color": "black"},
)
# ax1.text(26.3,-0.4, 'Strokes', {'font': 'Serif','weight':'bold','size': '16','weight':'bold','style':'normal', 'color':'#E84545'})
# ax1.text(30.2,-0.4, '|', {'color':'black' , 'size':'16', 'weight': 'bold'})
# ax1.text(30.6,-0.4, 'Healthy', {'font': 'Serif','weight':'bold', 'size': '16','style':'normal', 'weight':'bold','color':'#2B2E4A'})
ax1.text(
-5,
1,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax1.text(
-5,
0,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
ax1.text(
-5,
-0.5,
"People who are rarely ill or injured are less likely to have strokes,\
while if you \nare ill or injured for more than 5 days within 30 days you are more likely to get a stroke",
{"font": "Serif", "size": "16", "color": "black"},
)
# distribution plots ---- only single variable
sns.kdeplot(data=df, x="PhysicalHealth", ax=ax2, shade=True, color="#2B2E4A", alpha=1)
ax2.set_xlabel(
"Days being ill or injured",
fontdict={"font": "Serif", "color": "black", "size": 16, "weight": "bold"},
)
ax2.text(
-15,
0.6,
"Overall Physical Health Distribution",
{"font": "Serif", "color": "black", "weight": "bold", "size": 24},
)
ax2.text(
-15,
0.48,
"Physical Health is highly right-skewed, and it seems that most \nof the participants were not injured or ill for the past 30 days",
{"font": "Serif", "size": "16", "color": "black"},
)
ax2.text(
25,
0.4,
"Total",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
ax2.text(
28.5, 0.4, "=", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
30,
0.4,
"Strokes",
{"font": "Serif", "size": "14", "color": "#E84545", "weight": "bold"},
)
ax2.text(
35.5, 0.4, "+", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
37,
0.4,
"Healthy",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
# distribution plots with hue of strokes
sns.kdeplot(
data=df[df["Stroke"] == "Yes"],
x="PhysicalHealth",
ax=ax3,
shade=True,
alpha=1,
color="#E84545",
)
sns.kdeplot(
data=df[df["Stroke"] == "No"],
x="PhysicalHealth",
ax=ax3,
shade=True,
alpha=0.8,
color="#2B2E4A",
)
ax3.set_xlabel(
"Days being ill or injured",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax3.text(
-20,
0.6,
"Physical Health-Strokes Distribution",
{"font": "Serif", "weight": "bold", "color": "black", "size": 24},
)
ax3.text(
-20,
0.5,
"There is a peak in strokes at around 0 and 30 days of being injured or ill",
{"font": "Serif", "color": "black", "size": 16},
)
ax3.text(
28,
0.4,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax3.text(35.8, 0.4, "|", {"color": "black", "size": "16", "weight": "bold"})
ax3.text(
37,
0.4,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
fig.text(
0.35,
0.925,
"Strokes and Physical Health",
{"font": "Serif", "weight": "bold", "color": "black", "size": 35},
)
fig.show()
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[1:4, 0:8]) # distribution plot
ax3 = fig.add_subplot(gs[6:9, 0:8]) # hue distribution plot
ax1 = fig.add_subplot(gs[2:9, 13:]) # dumbbell plot
# axes list
axes = [ax1, ax2, ax3]
# setting of axes; visibility of axes and spines turn off
for ax in axes:
# ax.axes.get_yaxis().set_visible(False)
ax.set_facecolor("#f6f5f5")
for loc in ["left", "right", "top", "bottom"]:
ax.spines[loc].set_visible(False)
fig.patch.set_facecolor("#f6f5f5")
# ax1.axes.get_xaxis().set_visible(False)
ax1.axes.get_yaxis().set_visible(False)
# box plot of days of poor mental health for stroke vs. healthy people
sns.boxplot(y=df["Stroke"], x=df["MentalHealth"], ax=ax1, palette=palette)
ax1.set_xlabel(
"Days of bad mental health",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax1.text(
-5,
-0.75,
"Mental Health Impact on Strokes?",
{"font": "Serif", "size": "25", "weight": "bold", "color": "black"},
)
# ax1.text(26.3,-0.4, 'Strokes', {'font': 'Serif','weight':'bold','size': '16','weight':'bold','style':'normal', 'color':'#E84545'})
# ax1.text(30.2,-0.4, '|', {'color':'black' , 'size':'16', 'weight': 'bold'})
# ax1.text(30.6,-0.4, 'Healthy', {'font': 'Serif','weight':'bold', 'size': '16','style':'normal', 'weight':'bold','color':'#2B2E4A'})
ax1.text(
-5,
1,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax1.text(
-5,
0,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
ax1.text(
-5,
-0.5,
"People who have good mental health are less likely to have a stroke, but the chances of having a \n\
stroke increase if bad mental health persists for more than seven days.",
{"font": "Serif", "size": "16", "color": "black"},
)
# distribution plots ---- only single variable
sns.kdeplot(data=df, x="MentalHealth", ax=ax2, shade=True, color="#2B2E4A", alpha=1)
ax2.set_xlabel(
"Days of bad mental health",
fontdict={"font": "Serif", "color": "black", "size": 16, "weight": "bold"},
)
ax2.text(
-15,
0.6,
"Overall Mental Health Distribution",
{"font": "Serif", "color": "black", "weight": "bold", "size": 24},
)
ax2.text(
-15,
0.48,
"Mental Health is highly right-skewed, and it seems that most \nof the participants had good mental health for the past 30 days.",
{"font": "Serif", "size": "16", "color": "black"},
)
ax2.text(
25,
0.4,
"Total",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
ax2.text(
28.5, 0.4, "=", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
30,
0.4,
"Strokes",
{"font": "Serif", "size": "14", "color": "#E84545", "weight": "bold"},
)
ax2.text(
35.5, 0.4, "+", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
37,
0.4,
"Healthy",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
# distribution plots with hue of strokes
sns.kdeplot(
data=df[df["Stroke"] == "Yes"],
x="MentalHealth",
ax=ax3,
shade=True,
alpha=1,
color="#E84545",
)
sns.kdeplot(
data=df[df["Stroke"] == "No"],
x="MentalHealth",
ax=ax3,
shade=True,
alpha=0.8,
color="#2B2E4A",
)
ax3.set_xlabel(
"Days of bad mental health",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax3.text(
-20,
0.52,
"Mental Health-Strokes Distribution",
{"font": "Serif", "weight": "bold", "color": "black", "size": 24},
)
ax3.text(
-20,
0.45,
"Majority of people who have strokes have good mental health.",
{"font": "Serif", "color": "black", "size": 16},
)
ax3.text(
28,
0.4,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax3.text(35.8, 0.4, "|", {"color": "black", "size": "16", "weight": "bold"})
ax3.text(
37,
0.4,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
fig.text(
0.35,
0.925,
"Strokes and MentalHealth",
{"font": "Serif", "weight": "bold", "color": "black", "size": 35},
)
fig.show()
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[1:4, 0:8]) # distribution plot
ax3 = fig.add_subplot(gs[6:9, 0:8]) # hue distribution plot
ax1 = fig.add_subplot(gs[2:9, 13:]) # dumbbell plot
# axes list
axes = [ax1, ax2, ax3]
# setting of axes; visibility of axes and spines turn off
for ax in axes:
# ax.axes.get_yaxis().set_visible(False)
ax.set_facecolor("#f6f5f5")
for loc in ["left", "right", "top", "bottom"]:
ax.spines[loc].set_visible(False)
fig.patch.set_facecolor("#f6f5f5")
ax1.axes.get_xaxis().set_visible(False)
# ax1.axes.get_yaxis().set_visible(False)
ax1.set_xlim(xmin=-250, xmax=1800)
# dumbbell plot of sleep-time categories for stroke vs. healthy people
def func(time):
if time < 6:
return "Sleep Deprived"
elif time < 9:
return "Healthy Sleep"
elif time >= 9:
return "Excessive Sleeping"
pos_df = (
df[df["Stroke"] == "Yes"]
.SleepTime.map(func)
.value_counts()
.reindex(index=["Excessive Sleeping", "Healthy Sleep", "Sleep Deprived"])
)
neg_df = (
df[df["Stroke"] == "No"]
.SleepTime.map(func)
.value_counts()
.reindex(index=["Excessive Sleeping", "Healthy Sleep", "Sleep Deprived"])
)
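# scale the raw counts so the larger of the two groups spans roughly 1600 x-units on the dumbbell axis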
if pos_df.values.max() > neg_df.values.max():
c = 1600 / pos_df.values.max()
else:
c = 1600 / neg_df.values.max()
xmin = []
xmax = []
for i in ["Excessive Sleeping", "Healthy Sleep", "Sleep Deprived"]:
if pos_df[i] > neg_df[i]:
xmax.append(pos_df[i] * c)
xmin.append(neg_df[i] * c)
xmax.append(neg_df[i] * c)
xmin.append(pos_df[i] * c)
ax1.hlines(
y=["Excessive Sleeping", "Healthy Sleep", "Sleep Deprived"],
xmin=xmin,
xmax=xmax,
color="grey",
**{"linewidth": 0.5},
)
sns.scatterplot(
y=pos_df.index,
x=pos_df.values * c,
s=pos_df.values * 0.03,
color="#E84545",
ax=ax1,
alpha=1,
)
sns.scatterplot(
y=neg_df.index,
x=neg_df.values * c,
s=neg_df.values * 0.03,
color="#2B2E4A",
ax=ax1,
alpha=1,
)
ax1.set_yticklabels(
labels=[
"Excessive Sleeping\n(9h or more)",
"Healthy Sleep\n(7h to 8h)",
"Sleep Deprived\n(less than 6h)",
],
fontdict={
"font": "Serif",
"color": "#50312F",
"fontsize": 16,
"fontweight": "bold",
"color": "black",
},
)
ax1.text(
-400,
-0.55,
"Sleep Time Impact on Strokes?",
{"font": "Serif", "size": "25", "weight": "bold", "color": "black"},
)
ax1.text(
1750,
0,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax1.text(2035, 0, "|", {"color": "black", "size": "16", "weight": "bold"})
ax1.text(
2070,
0,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
ax1.text(
-400,
-0.3,
"There are more people who get strokes that get a healthy amount of sleep.",
{"font": "Serif", "size": "16", "color": "black"},
)
ax1.text(
pos_df.values[0] * c - 230,
0.05,
pos_df.values[0],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[0] * c + 140,
0.05,
neg_df.values[0],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
ax1.text(
pos_df.values[1] * c - 230,
1.05,
pos_df.values[1],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[1] * c + 140,
1.05,
neg_df.values[1],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
ax1.text(
pos_df.values[2] * c - 230,
2.05,
pos_df.values[2],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[2] * c + 140,
2.05,
neg_df.values[2],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
# distribution plots ---- only single variable
sns.kdeplot(data=df, x="SleepTime", ax=ax2, shade=True, color="#2B2E4A", alpha=1)
ax2.set_xlabel(
"Hours Slept on Average",
fontdict={"font": "Serif", "color": "black", "size": 16, "weight": "bold"},
)
ax2.text(
-8,
1.4,
"Overall Sleep Time Distribution",
{"font": "Serif", "color": "black", "weight": "bold", "size": 24},
)
ax2.text(
-8,
1.13,
"Most people get 6 to 8 hours of sleep.",
{"font": "Serif", "size": "16", "color": "black"},
)
ax2.text(
20,
0.8,
"Total",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
ax2.text(
22.6, 0.8, "=", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
23.5,
0.8,
"Strokes",
{"font": "Serif", "size": "14", "color": "#E84545", "weight": "bold"},
)
ax2.text(
27.5, 0.8, "+", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
28.5,
0.8,
"Healthy",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
# distribution plots with hue of strokes
sns.kdeplot(
data=df[df["Stroke"] == "Yes"],
x="SleepTime",
ax=ax3,
shade=True,
alpha=1,
color="#E84545",
)
sns.kdeplot(
data=df[df["Stroke"] == "No"],
x="SleepTime",
ax=ax3,
shade=True,
alpha=0.8,
color="#2B2E4A",
)
ax3.set_xlabel(
"Hours Slept on Average",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax3.text(
-9,
1.35,
"Sleep Time-Strokes Distribution",
{"font": "Serif", "weight": "bold", "color": "black", "size": 24},
)
ax3.text(
-9,
1.18,
"Majority of people who have strokes get 6 to 8 hours of sleep.",
{"font": "Serif", "color": "black", "size": 16},
)
ax3.text(
20,
0.8,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax3.text(24.6, 0.8, "|", {"color": "black", "size": "16", "weight": "bold"})
ax3.text(
25.2,
0.8,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
fig.text(
0.35,
0.925,
"Strokes and Sleep Time",
{"font": "Serif", "weight": "bold", "color": "black", "size": 35},
)
fig.show()
#
# Categorical Data
#
from pywaffle import Waffle
stroke_gen = df[df["Stroke"] == "Yes"]["Sex"].value_counts()
healthy_gen = df[df["Stroke"] == "No"]["Sex"].value_counts()
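# note: value_counts() orders categories by frequency, so this code assumes "Female" is the more
# frequent category both overall and within each Stroke subset; shares below are rounded to whole percentages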
female = df["Sex"].value_counts().values[0]
male = df["Sex"].value_counts().values[1]
stroke_female = int(round(stroke_gen.values[0] / female * 100, 0))
stroke_male = int(round(stroke_gen.values[1] / male * 100, 0))
healthy_female = int(round(healthy_gen.values[0] / female * 100, 0))
healthy_male = int(round(healthy_gen.values[1] / male * 100, 0))
female_per = int(round(female / (female + male) * 100, 0))
male_per = int(round(male / (female + male) * 100, 0))
fig = plt.figure(
FigureClass=Waffle,
constrained_layout=True,
figsize=(6, 6),
facecolor="#f6f5f5",
dpi=100,
plots={
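        # keys are matplotlib subplot position codes: 121 = left (male) panel, 122 = right (female) panel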
121: {
"rows": 7,
"columns": 7,
"values": [healthy_male, stroke_male],
"colors": ["#512b58", "#fe346e"],
"vertical": True,
"interval_ratio_y": 0.1,
"interval_ratio_x": 0.1,
"icons": "male",
"icon_legend": False,
"icon_size": 20,
"plot_anchor": "C",
"alpha": 0.1,
},
122: {
"rows": 7,
"columns": 7,
"values": [healthy_female, stroke_female],
"colors": ["#512b58", "#fe346e"],
"vertical": True,
"interval_ratio_y": 0.1,
"interval_ratio_x": 0.1,
"icons": "female",
"icon_legend": False,
"icon_size": 20,
"plot_anchor": "C",
"alpha": 0.1,
},
},
)
fig.text(
0.0,
0.8,
"Gender Risk for Stroke - effect of gender on strokes?",
{"font": "Serif", "size": 20, "color": "black", "weight": "bold"},
)
fig.text(
0.0,
0.73,
"Risk of stroke in both male and female are same,\nprove our initial assumption is wrong. ",
{"font": "Serif", "size": 13, "color": "black", "weight": "normal"},
alpha=0.7,
)
fig.text(
0.24,
0.22,
"ooo",
{"font": "Serif", "size": 16, "weight": "bold", "color": "#f6f5f5"},
)
fig.text(
0.65,
0.22,
"ooo",
{"font": "Serif", "size": 16, "weight": "bold", "color": "#f6f5f5"},
)
fig.text(
0.23,
0.28,
"{}%".format(healthy_male),
{"font": "Serif", "size": 20, "weight": "bold", "color": "#512b58"},
alpha=1,
)
fig.text(
0.65,
0.28,
"{}%".format(healthy_female),
{"font": "Serif", "size": 20, "weight": "bold", "color": "#512b58"},
alpha=1,
)
fig.text(
0.21,
0.67,
"Male ({}%)".format(male_per),
{"font": "Serif", "size": 14, "weight": "bold", "color": "black"},
alpha=0.5,
)
fig.text(
0.61,
0.67,
"Female({}%)".format(female_per),
{"font": "Serif", "size": 14, "weight": "bold", "color": "black"},
alpha=0.5,
)
# fig.text(0., 0.8, 'Assumption was proven wrong', {'font':'Serif', 'size':24, 'color':'black', 'weight':'bold'})
fig.text(
0.9,
0.73,
"Stroke ",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#fe346e",
},
)
fig.text(1.02, 0.73, "|", {"color": "black", "size": "16", "weight": "bold"})
fig.text(
1.035,
0.73,
"No Stroke",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#512b58",
},
alpha=1,
)
fig.show()
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[2:10, 3:10]) # distribution plot
ax3 = fig.add_subplot(gs[2:10, 10:17])  # waffle panel for the female chart
ax1 = fig.add_subplot(gs[2:10, 17:24])
Waffle.make_waffle(
ax=ax3, # pass axis to make_waffle
rows=7,
columns=7,
values=[healthy_female, stroke_female],
vertical=True,
interval_ratio_y=0.1,
interval_ratio_x=0.1,
icons="female",
font_size=44,
)
Waffle.make_waffle(
ax=ax2, # pass axis to make_waffle
rows=7,
columns=7,
values=[healthy_male, stroke_male],
vertical=True,
interval_ratio_y=0.1,
interval_ratio_x=0.1,
icons="male",
font_size=44,
)
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[1:4, 0:8]) # distribution plot
ax3 = fig.add_subplot(gs[6:9, 0:8]) # hue distribution plot
ax1 = fig.add_subplot(gs[2:9, 20:])
fig1 = plt.figure(
FigureClass=Waffle,
figsize=(3, 3),
rows=7,
columns=7,
values=[healthy_male, stroke_male],
vertical=True,
interval_ratio_y=0.1,
interval_ratio_x=0.1,
icons="male",
)
fig2 = plt.figure(
FigureClass=Waffle,
figsize=(3, 3),
rows=7,
columns=7,
values=[healthy_male, stroke_male],
vertical=True,
interval_ratio_y=0.1,
interval_ratio_x=0.1,
icons="male",
)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/453/129453559.ipynb
|
personal-key-indicators-of-heart-disease
|
kamilpytlak
|
[{"Id": 129453559, "ScriptId": 38242429, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12226220, "CreationDate": "05/14/2023 00:57:07", "VersionNumber": 1.0, "Title": "Heart Disease", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 612.0, "LinesInsertedFromPrevious": 612.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 185519252, "KernelVersionId": 129453559, "SourceDatasetVersionId": 3191579}]
|
[{"Id": 3191579, "DatasetId": 1936563, "DatasourceVersionId": 3241234, "CreatorUserId": 9492796, "LicenseName": "CC0: Public Domain", "CreationDate": "02/16/2022 10:18:03", "VersionNumber": 2.0, "Title": "Personal Key Indicators of Heart Disease", "Slug": "personal-key-indicators-of-heart-disease", "Subtitle": "2020 annual CDC survey data of 400k adults related to their health status", "Description": "# Key Indicators of Heart Disease\n## 2020 annual CDC survey data of 400k adults related to their health status\n\n### What topic does the dataset cover?\nAccording to the [CDC](https://www.cdc.gov/heartdisease/risk_factors.htm), heart disease is one of the leading causes of death for people of most races in the US (African Americans, American Indians and Alaska Natives, and white people). About half of all Americans (47%) have at least 1 of 3 key risk factors for heart disease: high blood pressure, high cholesterol, and smoking. Other key indicator include diabetic status, obesity (high BMI), not getting enough physical activity or drinking too much alcohol. Detecting and preventing the factors that have the greatest impact on heart disease is very important in healthcare. Computational developments, in turn, allow the application of machine learning methods to detect \"patterns\" from the data that can predict a patient's condition.\n\n### Where did the dataset come from and what treatments did it undergo?\nOriginally, the dataset come from the CDC and is a major part of the Behavioral Risk Factor Surveillance System (BRFSS), which conducts annual telephone surveys to gather data on the health status of U.S. residents. As the [CDC](https://www.cdc.gov/heartdisease/risk_factors.htm) describes: \"Established in 1984 with 15 states, BRFSS now collects data in all 50 states as well as the District of Columbia and three U.S. territories. BRFSS completes more than 400,000 adult interviews each year, making it the largest continuously conducted health survey system in the world.\". The most recent dataset (as of February 15, 2022) includes data from 2020. It consists of 401,958 rows and 279 columns. The vast majority of columns are questions asked to respondents about their health status, such as \"Do you have serious difficulty walking or climbing stairs?\" or \"Have you smoked at least 100 cigarettes in your entire life? [Note: 5 packs = 100 cigarettes]\". In this dataset, I noticed many different factors (questions) that directly or indirectly influence heart disease, so I decided to select the most relevant variables from it and do some cleaning so that it would be usable for machine learning projects.\n\n### What can you do with this dataset?\nAs described above, the original dataset of nearly 300 variables was reduced to just about 20 variables. In addition to classical EDA, this dataset can be used to apply a range of machine learning methods, most notably classifier models (logistic regression, SVM, random forest, etc.). You should treat the variable \"HeartDisease\" as a binary (\"Yes\" - respondent had heart disease; \"No\" - respondent had no heart disease). But note that classes are not balanced, so the classic model application approach is not advisable. Fixing the weights/undersampling should yield significantly betters results. Based on the dataset, I constructed a logistic regression model and embedded it in an application you might be inspired by: https://share.streamlit.io/kamilpytlak/heart-condition-checker/main/app.py. 
Can you indicate which variables have a significant effect on the likelihood of heart disease?", "VersionNotes": "Data Update 2022/02/16", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1936563, "CreatorUserId": 9492796, "OwnerUserId": 9492796.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3191579.0, "CurrentDatasourceVersionId": 3241234.0, "ForumId": 1960316, "Type": 2, "CreationDate": "02/15/2022 19:28:49", "LastActivityDate": "02/15/2022", "TotalViews": 320603, "TotalDownloads": 46135, "TotalVotes": 694, "TotalKernels": 186}]
|
[{"Id": 9492796, "UserName": "kamilpytlak", "DisplayName": "Kamil Pytlak", "RegisterDate": "01/25/2022", "PerformanceTier": 1}]
|
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# basic libraries
import numpy as np
import pandas as pd
# visulaization modules
import matplotlib.pyplot as plt
import seaborn as sns
# import plotly.express as px
# import plotly.graph_objs as go
np.random.seed(42)
pd.set_option("display.max_colwidth", -1)
palette = sns.color_palette(["#2B2E4A", "#E84545", "#E4EA8C", "#3F6C45"], as_cmap=True)
df = pd.read_csv(
"/kaggle/input/personal-key-indicators-of-heart-disease/heart_2020_cleaned.csv"
)
df.head()
column_descriptions = pd.DataFrame(
[
[
"Respondents that have ever reported having coronary heart disease (CHD) or myocardial infarction (MI)",
"Body Mass Index (BMI)",
"Have you smoked at least 100 cigarettes in your entire life? [Note: 5 packs = 100 cigarettes]",
"Heavy drinkers (adult men having more than 14 drinks per week and adult women having more than 7 drinks per week",
"(Ever told) (you had) a stroke?",
"Now thinking about your physical health, which includes physical illness and injury, for how many days during the past 30 days was your physical health not good? (0-30 days)",
"Thinking about your mental health, for how many days during the past 30 days was your mental health not good? (0-30 days)",
"Do you have serious difficulty walking or climbing stairs?",
"Are you male or female?",
"Fourteen-level age category",
"Imputed race/ethnicity value",
"(Ever told) (you had) diabetes?",
"Adults who reported doing physical activity or exercise during the past 30 days other than their regular job",
"Would you say that in general your health is...",
"On average, how many hours of sleep do you get in a 24-hour period?",
"(Ever told) (you had) asthma?",
"Not including kidney stones, bladder infection or incontinence, were you ever told you had kidney disease?",
"(Ever told) (you had) skin cancer?",
]
],
columns=df.columns,
)
df.info()
df.describe()
df.describe().columns
df.head()
# #
# Label Distribution
#
temp = pd.DataFrame(df.groupby("Stroke")["Stroke"].count())
ax = sns.barplot(
y=temp.index,
x=temp.columns[0],
data=temp,
palette=palette,
)
plt.text(
310000,
0,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
plt.text(
310000,
0.1,
f"{round(temp.iloc[0].values[0]/len(df), 2)*100}% ({temp.iloc[0].values[0]})",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
plt.text(
15000,
1,
"Stroke",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#E84545",
},
)
plt.text(
15000,
1.1,
f"{round(temp.iloc[1].values[0]/len(df), 2)*100}% ({temp.iloc[1].values[0]})",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#E84545",
},
)
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(True)
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
ax.set_title(
"Label Distribution",
{"font": "Serif", "weight": "bold", "color": "black", "size": 18},
)
plt.show()
#
# Numerical Data
# Numerical Data Description
#
numerical_cols = df.describe().columns
column_descriptions[numerical_cols].T
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[1:4, 0:8]) # distribution plot
ax3 = fig.add_subplot(gs[6:9, 0:8]) # hue distribution plot
ax1 = fig.add_subplot(gs[2:9, 13:]) # dumbbell plot
# axes list
axes = [ax1, ax2, ax3]
# setting of axes; visibility of axes and spines turn off
for ax in axes:
ax.axes.get_yaxis().set_visible(False)
ax.set_facecolor("#f6f5f5")
for loc in ["left", "right", "top", "bottom"]:
ax.spines[loc].set_visible(False)
fig.patch.set_facecolor("#f6f5f5")
ax1.axes.get_xaxis().set_visible(False)
ax1.axes.get_yaxis().set_visible(True)
ax1.set_xlim(xmin=-250, xmax=2000)
ax1.set_ylim(ymin=-1, ymax=3.5)
# dumbbell plot of BMI categories for stroke vs. healthy people
def bmi_label(bmi):
if bmi < 18.5:
return "Underweight"
elif bmi < 25:
return "Healthy Weight"
elif bmi < 30:
return "Overweight"
else:
return "Obese"
pos_df = (
df[df["Stroke"] == "Yes"]
.BMI.map(bmi_label)
.value_counts()
.reindex(index=["Obese", "Overweight", "Healthy Weight", "Underweight"])
)
neg_df = (
df[df["Stroke"] == "No"]
.BMI.map(bmi_label)
.value_counts()
.reindex(index=["Obese", "Overweight", "Healthy Weight", "Underweight"])
)
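# scale the raw counts so the larger of the two groups spans roughly 1900 x-units on the dumbbell axis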
if pos_df.values.max() > neg_df.values.max():
c = 1900 / pos_df.values.max()
else:
c = 1900 / neg_df.values.max()
xmin = []
xmax = []
for i in ["Obese", "Overweight", "Healthy Weight", "Underweight"]:
if pos_df[i] > neg_df[i]:
xmax.append(pos_df[i] * c)
xmin.append(neg_df[i] * c)
xmax.append(neg_df[i] * c)
xmin.append(pos_df[i] * c)
ax1.hlines(
y=["Obese", "Overweight", "Healthy Weight", "Underweight"],
xmin=xmin,
xmax=xmax,
color="grey",
**{"linewidth": 0.5},
)
sns.scatterplot(
y=pos_df.index,
x=pos_df.values / 60,
s=pos_df.values * 0.03,
color="#E84545",
ax=ax1,
alpha=1,
)
sns.scatterplot(
y=neg_df.index,
x=neg_df.values / 60,
s=neg_df.values * 0.03,
color="#2B2E4A",
ax=ax1,
alpha=1,
)
ax1.set_yticklabels(
labels=[
"Obese\n(BMI > 30)",
"Overweight\n(BMI < 30)",
"Healthy Weight\n(BMI < 25)",
"Underweight\n(BMI < 18.5)",
],
fontdict={
"font": "Serif",
"color": "#50312F",
"fontsize": 16,
"fontweight": "bold",
"color": "black",
},
)
ax1.text(
-750,
-1.5,
"BMI Impact on Strokes?",
{"font": "Serif", "size": "25", "weight": "bold", "color": "black"},
)
ax1.text(
1030,
-0.4,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax1.text(1300, -0.4, "|", {"color": "black", "size": "16", "weight": "bold"})
ax1.text(
1340,
-0.4,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
ax1.text(
-750,
-0.8,
"There are more strokes for people who are obese or overweight;\nhowever, people who are at a healthy weight are not safe.",
{"font": "Serif", "size": "16", "color": "black"},
)
ax1.text(
pos_df.values[0] * c - 230,
0.05,
pos_df.values[0],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[0] * c + 140,
0.05,
neg_df.values[0],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
ax1.text(
pos_df.values[1] * c - 230,
1.05,
pos_df.values[1],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[1] * c + 140,
1.05,
neg_df.values[1],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
ax1.text(
pos_df.values[2] * c - 230,
2.05,
pos_df.values[2],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[2] * c + 140,
2.05,
neg_df.values[2],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
ax1.text(
pos_df.values[3] * c - 230,
3.05,
pos_df.values[3],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[3] * c + 140,
3.05,
neg_df.values[3],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
# distribution plots ---- only single variable
sns.kdeplot(
data=df,
x="BMI",
ax=ax2,
shade=True,
color="#2B2E4A",
alpha=1,
)
ax2.set_xlabel(
"Body mass index of a person",
fontdict={"font": "Serif", "color": "black", "size": 16, "weight": "bold"},
)
ax2.text(
-17,
0.095,
"Overall BMI Distribution",
{"font": "Serif", "color": "black", "weight": "bold", "size": 24},
)
ax2.text(
-17,
0.085,
"BMI is highly right-skewed, and average BMI is around 30.",
{"font": "Serif", "size": "16", "color": "black"},
)
ax2.text(
63,
0.06,
"Total",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
ax2.text(
71.5, 0.06, "=", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
74.5,
0.06,
"Strokes",
{"font": "Serif", "size": "14", "color": "#E84545", "weight": "bold"},
)
ax2.text(
88.7, 0.06, "+", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
92.5,
0.06,
"Healthy",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
# distribution plots with hue of strokes
sns.kdeplot(
data=df[df["Stroke"] == "Yes"],
x="BMI",
ax=ax3,
shade=True,
alpha=1,
color="#E84545",
)
sns.kdeplot(
data=df[df["Stroke"] == "No"],
x="BMI",
ax=ax3,
shade=True,
alpha=0.8,
color="#2B2E4A",
)
ax3.set_xlabel(
"Body mass index of a person",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax3.text(
-15,
0.095,
"BMI-Strokes Distribution",
{"font": "Serif", "weight": "bold", "color": "black", "size": 24},
)
ax3.text(
-15,
0.085,
"There is a peak in strokes at around 30 BMI",
{"font": "Serif", "color": "black", "size": 16},
)
ax3.text(
66,
0.06,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax3.text(82, 0.06, "|", {"color": "black", "size": "16", "weight": "bold"})
ax3.text(
84,
0.06,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
fig.text(
0.35,
0.925,
"Strokes and BMI",
{"font": "Serif", "weight": "bold", "color": "black", "size": 35},
)
fig.show()
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[1:4, 0:8]) # distribution plot
ax3 = fig.add_subplot(gs[6:9, 0:8]) # hue distribution plot
ax1 = fig.add_subplot(gs[2:9, 13:]) # dumbbell plot
# axes list
axes = [ax1, ax2, ax3]
# setting of axes; visibility of axes and spines turn off
for ax in axes:
# ax.axes.get_yaxis().set_visible(False)
ax.set_facecolor("#f6f5f5")
for loc in ["left", "right", "top", "bottom"]:
ax.spines[loc].set_visible(False)
fig.patch.set_facecolor("#f6f5f5")
# ax1.axes.get_xaxis().set_visible(False)
ax1.axes.get_yaxis().set_visible(False)
# box plot of days of poor physical health for stroke vs. healthy people
sns.boxplot(y=df["Stroke"], x=df["PhysicalHealth"], ax=ax1, palette=palette)
ax1.set_xlabel(
"Days being ill or injured",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax1.text(
-5,
-0.75,
"Physical Health Impact on Strokes?",
{"font": "Serif", "size": "25", "weight": "bold", "color": "black"},
)
# ax1.text(26.3,-0.4, 'Strokes', {'font': 'Serif','weight':'bold','size': '16','weight':'bold','style':'normal', 'color':'#E84545'})
# ax1.text(30.2,-0.4, '|', {'color':'black' , 'size':'16', 'weight': 'bold'})
# ax1.text(30.6,-0.4, 'Healthy', {'font': 'Serif','weight':'bold', 'size': '16','style':'normal', 'weight':'bold','color':'#2B2E4A'})
ax1.text(
-5,
1,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax1.text(
-5,
0,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
ax1.text(
-5,
-0.5,
"People who are rarely ill or injured are less likely to have strokes,\
while if you \nare ill or injured for more than 5 days within 30 days you are more likely to get a stroke",
{"font": "Serif", "size": "16", "color": "black"},
)
# distribution plots ---- only single variable
sns.kdeplot(data=df, x="PhysicalHealth", ax=ax2, shade=True, color="#2B2E4A", alpha=1)
ax2.set_xlabel(
"Days being ill or injured",
fontdict={"font": "Serif", "color": "black", "size": 16, "weight": "bold"},
)
ax2.text(
-15,
0.6,
"Overall Physical Health Distribution",
{"font": "Serif", "color": "black", "weight": "bold", "size": 24},
)
ax2.text(
-15,
0.48,
"Physical Health is highly right-skewed, and it seems that most \nof the participants were not injured or ill for the past 30 days",
{"font": "Serif", "size": "16", "color": "black"},
)
ax2.text(
25,
0.4,
"Total",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
ax2.text(
28.5, 0.4, "=", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
30,
0.4,
"Strokes",
{"font": "Serif", "size": "14", "color": "#E84545", "weight": "bold"},
)
ax2.text(
35.5, 0.4, "+", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
37,
0.4,
"Healthy",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
# distribution plots with hue of strokes
sns.kdeplot(
data=df[df["Stroke"] == "Yes"],
x="PhysicalHealth",
ax=ax3,
shade=True,
alpha=1,
color="#E84545",
)
sns.kdeplot(
data=df[df["Stroke"] == "No"],
x="PhysicalHealth",
ax=ax3,
shade=True,
alpha=0.8,
color="#2B2E4A",
)
ax3.set_xlabel(
"Days being ill or injured",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax3.text(
-20,
0.6,
"Physical Health-Strokes Distribution",
{"font": "Serif", "weight": "bold", "color": "black", "size": 24},
)
ax3.text(
-20,
0.5,
"There is a peak in strokes at around 0 and 30 days of being injured or ill",
{"font": "Serif", "color": "black", "size": 16},
)
ax3.text(
28,
0.4,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax3.text(35.8, 0.4, "|", {"color": "black", "size": "16", "weight": "bold"})
ax3.text(
37,
0.4,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
fig.text(
0.35,
0.925,
"Strokes and Physical Health",
{"font": "Serif", "weight": "bold", "color": "black", "size": 35},
)
fig.show()
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[1:4, 0:8]) # distribution plot
ax3 = fig.add_subplot(gs[6:9, 0:8]) # hue distribution plot
ax1 = fig.add_subplot(gs[2:9, 13:]) # dumbbell plot
# axes list
axes = [ax1, ax2, ax3]
# setting of axes; visibility of axes and spines turn off
for ax in axes:
# ax.axes.get_yaxis().set_visible(False)
ax.set_facecolor("#f6f5f5")
for loc in ["left", "right", "top", "bottom"]:
ax.spines[loc].set_visible(False)
fig.patch.set_facecolor("#f6f5f5")
# ax1.axes.get_xaxis().set_visible(False)
ax1.axes.get_yaxis().set_visible(False)
# box plot of days of poor mental health for stroke vs. healthy people
sns.boxplot(y=df["Stroke"], x=df["MentalHealth"], ax=ax1, palette=palette)
ax1.set_xlabel(
"Days of bad mental health",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax1.text(
-5,
-0.75,
"Mental Health Impact on Strokes?",
{"font": "Serif", "size": "25", "weight": "bold", "color": "black"},
)
# ax1.text(26.3,-0.4, 'Strokes', {'font': 'Serif','weight':'bold','size': '16','weight':'bold','style':'normal', 'color':'#E84545'})
# ax1.text(30.2,-0.4, '|', {'color':'black' , 'size':'16', 'weight': 'bold'})
# ax1.text(30.6,-0.4, 'Healthy', {'font': 'Serif','weight':'bold', 'size': '16','style':'normal', 'weight':'bold','color':'#2B2E4A'})
ax1.text(
-5,
1,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax1.text(
-5,
0,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
ax1.text(
-5,
-0.5,
"People who have good mental health are less likely to have a stroke, but the chances of having a \n\
stroke increase if bad mental health persists for more than seven days.",
{"font": "Serif", "size": "16", "color": "black"},
)
# distribution plots ---- only single variable
sns.kdeplot(data=df, x="MentalHealth", ax=ax2, shade=True, color="#2B2E4A", alpha=1)
ax2.set_xlabel(
"Days of bad mental health",
fontdict={"font": "Serif", "color": "black", "size": 16, "weight": "bold"},
)
ax2.text(
-15,
0.6,
"Overall Mental Health Distribution",
{"font": "Serif", "color": "black", "weight": "bold", "size": 24},
)
ax2.text(
-15,
0.48,
"Mental Health is highly right-skewed, and it seems that most \nof the participants had good mental health for the past 30 days.",
{"font": "Serif", "size": "16", "color": "black"},
)
ax2.text(
25,
0.4,
"Total",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
ax2.text(
28.5, 0.4, "=", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
30,
0.4,
"Strokes",
{"font": "Serif", "size": "14", "color": "#E84545", "weight": "bold"},
)
ax2.text(
35.5, 0.4, "+", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
37,
0.4,
"Healthy",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
# distribution plots with hue of strokes
sns.kdeplot(
data=df[df["Stroke"] == "Yes"],
x="MentalHealth",
ax=ax3,
shade=True,
alpha=1,
color="#E84545",
)
sns.kdeplot(
data=df[df["Stroke"] == "No"],
x="MentalHealth",
ax=ax3,
shade=True,
alpha=0.8,
color="#2B2E4A",
)
ax3.set_xlabel(
"Days of bad mental health",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax3.text(
-20,
0.52,
"Mental Health-Strokes Distribution",
{"font": "Serif", "weight": "bold", "color": "black", "size": 24},
)
ax3.text(
-20,
0.45,
"Majority of people who have strokes have good mental health.",
{"font": "Serif", "color": "black", "size": 16},
)
ax3.text(
28,
0.4,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax3.text(35.8, 0.4, "|", {"color": "black", "size": "16", "weight": "bold"})
ax3.text(
37,
0.4,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
fig.text(
0.35,
0.925,
"Strokes and MentalHealth",
{"font": "Serif", "weight": "bold", "color": "black", "size": 35},
)
fig.show()
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[1:4, 0:8]) # distribution plot
ax3 = fig.add_subplot(gs[6:9, 0:8]) # hue distribution plot
ax1 = fig.add_subplot(gs[2:9, 13:]) # dumbbell plot
# axes list
axes = [ax1, ax2, ax3]
# setting of axes; visibility of axes and spines turn off
for ax in axes:
# ax.axes.get_yaxis().set_visible(False)
ax.set_facecolor("#f6f5f5")
for loc in ["left", "right", "top", "bottom"]:
ax.spines[loc].set_visible(False)
fig.patch.set_facecolor("#f6f5f5")
ax1.axes.get_xaxis().set_visible(False)
# ax1.axes.get_yaxis().set_visible(False)
ax1.set_xlim(xmin=-250, xmax=1800)
# dumbbell plot of sleep-time categories for stroke vs. healthy people
def func(time):
if time < 6:
return "Sleep Deprived"
elif time < 9:
return "Healthy Sleep"
elif time >= 9:
return "Excessive Sleeping"
pos_df = (
df[df["Stroke"] == "Yes"]
.SleepTime.map(func)
.value_counts()
.reindex(index=["Excessive Sleeping", "Healthy Sleep", "Sleep Deprived"])
)
neg_df = (
df[df["Stroke"] == "No"]
.SleepTime.map(func)
.value_counts()
.reindex(index=["Excessive Sleeping", "Healthy Sleep", "Sleep Deprived"])
)
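# scale the raw counts so the larger of the two groups spans roughly 1600 x-units on the dumbbell axis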
if pos_df.values.max() > neg_df.values.max():
c = 1600 / pos_df.values.max()
else:
c = 1600 / neg_df.values.max()
xmin = []
xmax = []
for i in ["Excessive Sleeping", "Healthy Sleep", "Sleep Deprived"]:
if pos_df[i] > neg_df[i]:
xmax.append(pos_df[i] * c)
xmin.append(neg_df[i] * c)
xmax.append(neg_df[i] * c)
xmin.append(pos_df[i] * c)
ax1.hlines(
y=["Excessive Sleeping", "Healthy Sleep", "Sleep Deprived"],
xmin=xmin,
xmax=xmax,
color="grey",
**{"linewidth": 0.5},
)
sns.scatterplot(
y=pos_df.index,
x=pos_df.values * c,
s=pos_df.values * 0.03,
color="#E84545",
ax=ax1,
alpha=1,
)
sns.scatterplot(
y=neg_df.index,
x=neg_df.values * c,
s=neg_df.values * 0.03,
color="#2B2E4A",
ax=ax1,
alpha=1,
)
ax1.set_yticklabels(
labels=[
"Excessive Sleeping\n(9h or more)",
"Healthy Sleep\n(7h to 8h)",
"Sleep Deprived\n(less than 6h)",
],
fontdict={
"font": "Serif",
"color": "#50312F",
"fontsize": 16,
"fontweight": "bold",
"color": "black",
},
)
ax1.text(
-400,
-0.55,
"Sleep Time Impact on Strokes?",
{"font": "Serif", "size": "25", "weight": "bold", "color": "black"},
)
ax1.text(
1750,
0,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax1.text(2035, 0, "|", {"color": "black", "size": "16", "weight": "bold"})
ax1.text(
2070,
0,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
ax1.text(
-400,
-0.3,
"There are more people who get strokes that get a healthy amount of sleep.",
{"font": "Serif", "size": "16", "color": "black"},
)
ax1.text(
pos_df.values[0] * c - 230,
0.05,
pos_df.values[0],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[0] * c + 140,
0.05,
neg_df.values[0],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
ax1.text(
pos_df.values[1] * c - 230,
1.05,
pos_df.values[1],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[1] * c + 140,
1.05,
neg_df.values[1],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
ax1.text(
pos_df.values[2] * c - 230,
2.05,
pos_df.values[2],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#E84545"},
)
ax1.text(
neg_df.values[2] * c + 140,
2.05,
neg_df.values[2],
{"font": "Serif", "size": 14, "weight": "bold", "color": "#2B2E4A"},
)
# distribution plots ---- only single variable
sns.kdeplot(data=df, x="SleepTime", ax=ax2, shade=True, color="#2B2E4A", alpha=1)
ax2.set_xlabel(
"Hours Slept on Average",
fontdict={"font": "Serif", "color": "black", "size": 16, "weight": "bold"},
)
ax2.text(
-8,
1.4,
"Overall Sleep Time Distribution",
{"font": "Serif", "color": "black", "weight": "bold", "size": 24},
)
ax2.text(
-8,
1.13,
"Most people get 6 to 8 hours of sleep.",
{"font": "Serif", "size": "16", "color": "black"},
)
ax2.text(
20,
0.8,
"Total",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
ax2.text(
22.6, 0.8, "=", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
23.5,
0.8,
"Strokes",
{"font": "Serif", "size": "14", "color": "#E84545", "weight": "bold"},
)
ax2.text(
27.5, 0.8, "+", {"font": "Serif", "size": "14", "color": "black", "weight": "bold"}
)
ax2.text(
28.5,
0.8,
"Healthy",
{"font": "Serif", "size": "14", "color": "#2B2E4A", "weight": "bold"},
)
# distribution plots with hue of strokes
sns.kdeplot(
data=df[df["Stroke"] == "Yes"],
x="SleepTime",
ax=ax3,
shade=True,
alpha=1,
color="#E84545",
)
sns.kdeplot(
data=df[df["Stroke"] == "No"],
x="SleepTime",
ax=ax3,
shade=True,
alpha=0.8,
color="#2B2E4A",
)
ax3.set_xlabel(
"Hours Slept on Average",
fontdict={"font": "Serif", "color": "black", "weight": "bold", "size": 16},
)
ax3.text(
-9,
1.35,
"Sleep Time-Strokes Distribution",
{"font": "Serif", "weight": "bold", "color": "black", "size": 24},
)
ax3.text(
-9,
1.18,
"Majority of people who have strokes get 6 to 8 hours of sleep.",
{"font": "Serif", "color": "black", "size": 16},
)
ax3.text(
20,
0.8,
"Strokes",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#E84545",
},
)
ax3.text(24.6, 0.8, "|", {"color": "black", "size": "16", "weight": "bold"})
ax3.text(
25.2,
0.8,
"Healthy",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#2B2E4A",
},
)
fig.text(
0.35,
0.925,
"Strokes and Sleep Time",
{"font": "Serif", "weight": "bold", "color": "black", "size": 35},
)
fig.show()
#
# Categorical Data
#
from pywaffle import Waffle
stroke_gen = df[df["Stroke"] == "Yes"]["Sex"].value_counts()
healthy_gen = df[df["Stroke"] == "No"]["Sex"].value_counts()
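# note: value_counts() orders categories by frequency, so this code assumes "Female" is the more
# frequent category both overall and within each Stroke subset; shares below are rounded to whole percentages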
female = df["Sex"].value_counts().values[0]
male = df["Sex"].value_counts().values[1]
stroke_female = int(round(stroke_gen.values[0] / female * 100, 0))
stroke_male = int(round(stroke_gen.values[1] / male * 100, 0))
healthy_female = int(round(healthy_gen.values[0] / female * 100, 0))
healthy_male = int(round(healthy_gen.values[1] / male * 100, 0))
female_per = int(round(female / (female + male) * 100, 0))
male_per = int(round(male / (female + male) * 100, 0))
fig = plt.figure(
FigureClass=Waffle,
constrained_layout=True,
figsize=(6, 6),
facecolor="#f6f5f5",
dpi=100,
plots={
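        # keys are matplotlib subplot position codes: 121 = left (male) panel, 122 = right (female) panel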
121: {
"rows": 7,
"columns": 7,
"values": [healthy_male, stroke_male],
"colors": ["#512b58", "#fe346e"],
"vertical": True,
"interval_ratio_y": 0.1,
"interval_ratio_x": 0.1,
"icons": "male",
"icon_legend": False,
"icon_size": 20,
"plot_anchor": "C",
"alpha": 0.1,
},
122: {
"rows": 7,
"columns": 7,
"values": [healthy_female, stroke_female],
"colors": ["#512b58", "#fe346e"],
"vertical": True,
"interval_ratio_y": 0.1,
"interval_ratio_x": 0.1,
"icons": "female",
"icon_legend": False,
"icon_size": 20,
"plot_anchor": "C",
"alpha": 0.1,
},
},
)
fig.text(
0.0,
0.8,
"Gender Risk for Stroke - effect of gender on strokes?",
{"font": "Serif", "size": 20, "color": "black", "weight": "bold"},
)
fig.text(
0.0,
0.73,
"Risk of stroke in both male and female are same,\nprove our initial assumption is wrong. ",
{"font": "Serif", "size": 13, "color": "black", "weight": "normal"},
alpha=0.7,
)
fig.text(
0.24,
0.22,
"ooo",
{"font": "Serif", "size": 16, "weight": "bold", "color": "#f6f5f5"},
)
fig.text(
0.65,
0.22,
"ooo",
{"font": "Serif", "size": 16, "weight": "bold", "color": "#f6f5f5"},
)
fig.text(
0.23,
0.28,
"{}%".format(healthy_male),
{"font": "Serif", "size": 20, "weight": "bold", "color": "#512b58"},
alpha=1,
)
fig.text(
0.65,
0.28,
"{}%".format(healthy_female),
{"font": "Serif", "size": 20, "weight": "bold", "color": "#512b58"},
alpha=1,
)
fig.text(
0.21,
0.67,
"Male ({}%)".format(male_per),
{"font": "Serif", "size": 14, "weight": "bold", "color": "black"},
alpha=0.5,
)
fig.text(
0.61,
0.67,
"Female({}%)".format(female_per),
{"font": "Serif", "size": 14, "weight": "bold", "color": "black"},
alpha=0.5,
)
# fig.text(0., 0.8, 'Assumption was proven wrong', {'font':'Serif', 'size':24, 'color':'black', 'weight':'bold'})
fig.text(
0.9,
0.73,
"Stroke ",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"weight": "bold",
"style": "normal",
"color": "#fe346e",
},
)
fig.text(1.02, 0.73, "|", {"color": "black", "size": "16", "weight": "bold"})
fig.text(
1.035,
0.73,
"No Stroke",
{
"font": "Serif",
"weight": "bold",
"size": "16",
"style": "normal",
"weight": "bold",
"color": "#512b58",
},
alpha=1,
)
fig.show()
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[2:10, 3:10]) # distribution plot
ax3 = fig.add_subplot(gs[2:10, 10:17])  # waffle panel for the female chart
ax1 = fig.add_subplot(gs[2:10, 17:24])
Waffle.make_waffle(
ax=ax3, # pass axis to make_waffle
rows=7,
columns=7,
values=[healthy_female, stroke_female],
vertical=True,
interval_ratio_y=0.1,
interval_ratio_x=0.1,
icons="female",
font_size=44,
)
Waffle.make_waffle(
ax=ax2, # pass axis to make_waffle
rows=7,
columns=7,
values=[healthy_male, stroke_male],
vertical=True,
interval_ratio_y=0.1,
interval_ratio_x=0.1,
icons="male",
font_size=44,
)
fig = plt.figure(figsize=(24, 10), dpi=60)
gs = fig.add_gridspec(10, 24)
gs.update(wspace=1, hspace=0.05)
ax2 = fig.add_subplot(gs[1:4, 0:8]) # distribution plot
ax3 = fig.add_subplot(gs[6:9, 0:8]) # hue distribution plot
ax1 = fig.add_subplot(gs[2:9, 20:])
fig1 = plt.figure(
FigureClass=Waffle,
figsize=(3, 3),
rows=7,
columns=7,
values=[healthy_male, stroke_male],
vertical=True,
interval_ratio_y=0.1,
interval_ratio_x=0.1,
icons="male",
)
fig2 = plt.figure(
FigureClass=Waffle,
figsize=(3, 3),
rows=7,
columns=7,
values=[healthy_male, stroke_male],
vertical=True,
interval_ratio_y=0.1,
interval_ratio_x=0.1,
icons="male",
)
|
[{"personal-key-indicators-of-heart-disease/heart_2020_cleaned.csv": {"column_names": "[\"HeartDisease\", \"BMI\", \"Smoking\", \"AlcoholDrinking\", \"Stroke\", \"PhysicalHealth\", \"MentalHealth\", \"DiffWalking\", \"Sex\", \"AgeCategory\", \"Race\", \"Diabetic\", \"PhysicalActivity\", \"GenHealth\", \"SleepTime\", \"Asthma\", \"KidneyDisease\", \"SkinCancer\"]", "column_data_types": "{\"HeartDisease\": \"object\", \"BMI\": \"float64\", \"Smoking\": \"object\", \"AlcoholDrinking\": \"object\", \"Stroke\": \"object\", \"PhysicalHealth\": \"float64\", \"MentalHealth\": \"float64\", \"DiffWalking\": \"object\", \"Sex\": \"object\", \"AgeCategory\": \"object\", \"Race\": \"object\", \"Diabetic\": \"object\", \"PhysicalActivity\": \"object\", \"GenHealth\": \"object\", \"SleepTime\": \"float64\", \"Asthma\": \"object\", \"KidneyDisease\": \"object\", \"SkinCancer\": \"object\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 319795 entries, 0 to 319794\nData columns (total 18 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 HeartDisease 319795 non-null object \n 1 BMI 319795 non-null float64\n 2 Smoking 319795 non-null object \n 3 AlcoholDrinking 319795 non-null object \n 4 Stroke 319795 non-null object \n 5 PhysicalHealth 319795 non-null float64\n 6 MentalHealth 319795 non-null float64\n 7 DiffWalking 319795 non-null object \n 8 Sex 319795 non-null object \n 9 AgeCategory 319795 non-null object \n 10 Race 319795 non-null object \n 11 Diabetic 319795 non-null object \n 12 PhysicalActivity 319795 non-null object \n 13 GenHealth 319795 non-null object \n 14 SleepTime 319795 non-null float64\n 15 Asthma 319795 non-null object \n 16 KidneyDisease 319795 non-null object \n 17 SkinCancer 319795 non-null object \ndtypes: float64(4), object(14)\nmemory usage: 43.9+ MB\n", "summary": "{\"BMI\": {\"count\": 319795.0, \"mean\": 28.325398520927465, \"std\": 6.356100200470739, \"min\": 12.02, \"25%\": 24.03, \"50%\": 27.34, \"75%\": 31.42, \"max\": 94.85}, \"PhysicalHealth\": {\"count\": 319795.0, \"mean\": 3.3717100017198516, \"std\": 7.950850182571368, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 2.0, \"max\": 30.0}, \"MentalHealth\": {\"count\": 319795.0, \"mean\": 3.898366140808956, \"std\": 7.955235218943607, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 3.0, \"max\": 30.0}, \"SleepTime\": {\"count\": 319795.0, \"mean\": 7.097074688472302, \"std\": 1.4360070609642825, \"min\": 1.0, \"25%\": 6.0, \"50%\": 7.0, \"75%\": 8.0, \"max\": 24.0}}", "examples": "{\"HeartDisease\":{\"0\":\"No\",\"1\":\"No\",\"2\":\"No\",\"3\":\"No\"},\"BMI\":{\"0\":16.6,\"1\":20.34,\"2\":26.58,\"3\":24.21},\"Smoking\":{\"0\":\"Yes\",\"1\":\"No\",\"2\":\"Yes\",\"3\":\"No\"},\"AlcoholDrinking\":{\"0\":\"No\",\"1\":\"No\",\"2\":\"No\",\"3\":\"No\"},\"Stroke\":{\"0\":\"No\",\"1\":\"Yes\",\"2\":\"No\",\"3\":\"No\"},\"PhysicalHealth\":{\"0\":3.0,\"1\":0.0,\"2\":20.0,\"3\":0.0},\"MentalHealth\":{\"0\":30.0,\"1\":0.0,\"2\":30.0,\"3\":0.0},\"DiffWalking\":{\"0\":\"No\",\"1\":\"No\",\"2\":\"No\",\"3\":\"No\"},\"Sex\":{\"0\":\"Female\",\"1\":\"Female\",\"2\":\"Male\",\"3\":\"Female\"},\"AgeCategory\":{\"0\":\"55-59\",\"1\":\"80 or older\",\"2\":\"65-69\",\"3\":\"75-79\"},\"Race\":{\"0\":\"White\",\"1\":\"White\",\"2\":\"White\",\"3\":\"White\"},\"Diabetic\":{\"0\":\"Yes\",\"1\":\"No\",\"2\":\"Yes\",\"3\":\"No\"},\"PhysicalActivity\":{\"0\":\"Yes\",\"1\":\"Yes\",\"2\":\"Yes\",\"3\":\"No\"},\"GenHealth\":{\"0\":\"Very good\",\"1\":\"Very 
good\",\"2\":\"Fair\",\"3\":\"Good\"},\"SleepTime\":{\"0\":5.0,\"1\":7.0,\"2\":8.0,\"3\":6.0},\"Asthma\":{\"0\":\"Yes\",\"1\":\"No\",\"2\":\"Yes\",\"3\":\"No\"},\"KidneyDisease\":{\"0\":\"No\",\"1\":\"No\",\"2\":\"No\",\"3\":\"No\"},\"SkinCancer\":{\"0\":\"Yes\",\"1\":\"No\",\"2\":\"No\",\"3\":\"Yes\"}}"}}]
| true | 1 |
<start_data_description><data_path>personal-key-indicators-of-heart-disease/heart_2020_cleaned.csv:
<column_names>
['HeartDisease', 'BMI', 'Smoking', 'AlcoholDrinking', 'Stroke', 'PhysicalHealth', 'MentalHealth', 'DiffWalking', 'Sex', 'AgeCategory', 'Race', 'Diabetic', 'PhysicalActivity', 'GenHealth', 'SleepTime', 'Asthma', 'KidneyDisease', 'SkinCancer']
<column_types>
{'HeartDisease': 'object', 'BMI': 'float64', 'Smoking': 'object', 'AlcoholDrinking': 'object', 'Stroke': 'object', 'PhysicalHealth': 'float64', 'MentalHealth': 'float64', 'DiffWalking': 'object', 'Sex': 'object', 'AgeCategory': 'object', 'Race': 'object', 'Diabetic': 'object', 'PhysicalActivity': 'object', 'GenHealth': 'object', 'SleepTime': 'float64', 'Asthma': 'object', 'KidneyDisease': 'object', 'SkinCancer': 'object'}
<dataframe_Summary>
{'BMI': {'count': 319795.0, 'mean': 28.325398520927465, 'std': 6.356100200470739, 'min': 12.02, '25%': 24.03, '50%': 27.34, '75%': 31.42, 'max': 94.85}, 'PhysicalHealth': {'count': 319795.0, 'mean': 3.3717100017198516, 'std': 7.950850182571368, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 2.0, 'max': 30.0}, 'MentalHealth': {'count': 319795.0, 'mean': 3.898366140808956, 'std': 7.955235218943607, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 3.0, 'max': 30.0}, 'SleepTime': {'count': 319795.0, 'mean': 7.097074688472302, 'std': 1.4360070609642825, 'min': 1.0, '25%': 6.0, '50%': 7.0, '75%': 8.0, 'max': 24.0}}
<dataframe_info>
RangeIndex: 319795 entries, 0 to 319794
Data columns (total 18 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 HeartDisease 319795 non-null object
1 BMI 319795 non-null float64
2 Smoking 319795 non-null object
3 AlcoholDrinking 319795 non-null object
4 Stroke 319795 non-null object
5 PhysicalHealth 319795 non-null float64
6 MentalHealth 319795 non-null float64
7 DiffWalking 319795 non-null object
8 Sex 319795 non-null object
9 AgeCategory 319795 non-null object
10 Race 319795 non-null object
11 Diabetic 319795 non-null object
12 PhysicalActivity 319795 non-null object
13 GenHealth 319795 non-null object
14 SleepTime 319795 non-null float64
15 Asthma 319795 non-null object
16 KidneyDisease 319795 non-null object
17 SkinCancer 319795 non-null object
dtypes: float64(4), object(14)
memory usage: 43.9+ MB
<some_examples>
{'HeartDisease': {'0': 'No', '1': 'No', '2': 'No', '3': 'No'}, 'BMI': {'0': 16.6, '1': 20.34, '2': 26.58, '3': 24.21}, 'Smoking': {'0': 'Yes', '1': 'No', '2': 'Yes', '3': 'No'}, 'AlcoholDrinking': {'0': 'No', '1': 'No', '2': 'No', '3': 'No'}, 'Stroke': {'0': 'No', '1': 'Yes', '2': 'No', '3': 'No'}, 'PhysicalHealth': {'0': 3.0, '1': 0.0, '2': 20.0, '3': 0.0}, 'MentalHealth': {'0': 30.0, '1': 0.0, '2': 30.0, '3': 0.0}, 'DiffWalking': {'0': 'No', '1': 'No', '2': 'No', '3': 'No'}, 'Sex': {'0': 'Female', '1': 'Female', '2': 'Male', '3': 'Female'}, 'AgeCategory': {'0': '55-59', '1': '80 or older', '2': '65-69', '3': '75-79'}, 'Race': {'0': 'White', '1': 'White', '2': 'White', '3': 'White'}, 'Diabetic': {'0': 'Yes', '1': 'No', '2': 'Yes', '3': 'No'}, 'PhysicalActivity': {'0': 'Yes', '1': 'Yes', '2': 'Yes', '3': 'No'}, 'GenHealth': {'0': 'Very good', '1': 'Very good', '2': 'Fair', '3': 'Good'}, 'SleepTime': {'0': 5.0, '1': 7.0, '2': 8.0, '3': 6.0}, 'Asthma': {'0': 'Yes', '1': 'No', '2': 'Yes', '3': 'No'}, 'KidneyDisease': {'0': 'No', '1': 'No', '2': 'No', '3': 'No'}, 'SkinCancer': {'0': 'Yes', '1': 'No', '2': 'No', '3': 'Yes'}}
<end_description>
| 12,401 | 0 | 14,385 | 12,401 |
129453748
|
<jupyter_start><jupyter_text>Smoking Dataset from UK
```
Survey data on smoking habits from the United Kingdom. The data set can be used for analyzing the demographic characteristics of smokers and types of tobacco consumed. A data frame with 1691 observations on the following 12 variables.
```
| Column | Description |
| --- | --- |
| gender | Gender with levels Female and Male. |
| age | Age. |
| marital_status | Marital status with levels Divorced, Married, Separated, Single and Widowed. |
| highest_qualification | Highest education level with levels A Levels, Degree, GCSE/CSE, GCSE/O Level, Higher/Sub Degree, No Qualification, ONC/BTEC and Other/Sub Degree |
| nationality | Nationality with levels British, English, Irish, Scottish, Welsh, Other, Refused and Unknown. |
| ethnicity | Ethnicity with levels Asian, Black, Chinese, Mixed, White and Refused Unknown. |
| gross_income | Gross income with levels Under 2,600, 2,600 to 5,200, 5,200 to 10,400, 10,400 to 15,600, 15,600 to 20,800, 20,800 to 28,600, 28,600 to 36,400, Above 36,400, Refused and Unknown. |
| region | Region with levels London, Midlands & East Anglia, Scotland, South East, South West, The North and Wales |
| smoke | Smoking status with levels No and Yes |
| amt_weekends | Number of cigarettes smoked per day on weekends. |
| amt_weekdays | Number of cigarettes smoked per day on weekdays. |
| type | Type of cigarettes smoked with levels Packets, Hand-Rolled, Both/Mainly Packets and Both/Mainly Hand-Rolled
|
# Source
National STEM Centre, Large Datasets from stats4schools, https://www.stem.org.uk/resources/elibrary/resource/28452/large-datasets-stats4schools.
Kaggle dataset identifier: smoking-dataset-from-uk
<jupyter_script># # Introduction
# **According to the article "What is a Chi-Square Test? Formula, Examples & Application" by Avijeet Biswa (https://www.simplilearn.com/tutorials/statistics-tutorial/chi-square-test)**
# What Are Categorical Variables?
# Categorical variables belong to a subset of variables that can be divided into discrete categories. Names or labels are the most common categories. These variables are also known as qualitative variables because they depict the variable's quality or characteristics.
# Categorical variables can be divided into two categories:
# 1. Nominal Variable: A nominal variable's categories have no natural ordering. Example: Gender, Blood groups
# 2. Ordinal Variable: A variable that allows the categories to be sorted is ordinal variables. Customer satisfaction (Excellent, Very Good, Good, Average, Bad, and so on) is an example.
# What Is a Chi-Square Test?
# The Chi-Square test is a statistical procedure for determining the difference between observed and expected data. This test can also be used to determine whether it correlates to the categorical variables in our data. It helps to find out whether a difference between two categorical variables is due to chance or a relationship between them.
# 
# **In this article, I tried the Chi-Square test between 'smoke' and some categorical variables.**
# reference:
# "What is a Chi-Square Test? Formula, Examples & Application" by Avijeet Biswa (https://www.simplilearn.com/tutorials/statistics-tutorial/chi-square-test)
# # Importing
import scipy.stats as stats # It has all the probability distributions available along with many statistical functions.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
sns.set(style="darkgrid")
from scipy.stats import chi2, chi2_contingency
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# Reading the csv
data_path = "../input/smoking-dataset-from-uk/smoking.csv"
df = pd.read_csv(data_path)
smoking = df.copy()
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# inspect data, print top 5
smoking.head(5)
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# bottom 5 rows:
smoking.tail(5)
# # EDA
# I referred to "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing') for the EDA.
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# get the size of dataframe
print("Rows : ", smoking.shape[0])
print("Columns : ", smoking.shape[1])
print("\nFeatures : \n", smoking.columns.tolist())
print("\nMissing values : ", smoking.isnull().sum().values.sum())
print("\nUnique values : \n", smoking.nunique())
smoking = smoking.drop("Unnamed: 0", axis=1)
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
smoking.info()
# Types of variables
# * Categorical variables - 'gender', 'marital_status', 'highest_qualification','nationality', 'ethnicity', 'gross_income', 'region', 'smoke','type'
# * Quantitative variables -'age','amt_weekends','amt_weekdays'
# There are 421 missing values in 'amt_weekends', 'amt_weekdays' and 'type'.
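# A small added check (not in the referenced kernel) to confirm which columns hold those missing values:
print(smoking.isnull().sum().sort_values(ascending=False).head(5))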
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# changing object dtype to category to save memory
smoking.gender = smoking["gender"].astype("category")
smoking.marital_status = smoking["marital_status"].astype("category")
smoking.highest_qualification = smoking["highest_qualification"].astype("category")
smoking.nationality = smoking["nationality"].astype("category")
smoking.ethnicity = smoking["ethnicity"].astype("category")
smoking.gross_income = smoking["gross_income"].astype("category")
smoking.region = smoking["region"].astype("category")
smoking.smoke = smoking["smoke"].astype("category")
smoking.type = smoking["type"].astype("category")
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
smoking.describe()
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
smoking.describe(include="category")
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
list_col = smoking.select_dtypes(["category"]).columns
for i in range(len(list_col)):
print(smoking[list_col[i]].value_counts())
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
def dist_box(data):
    # function plots a combined graph for univariate analysis of a continuous variable
    # to check spread, central tendency, dispersion and outliers
Name = data.name.upper()
fig, (ax_box, ax_dis) = plt.subplots(
2, 1, gridspec_kw={"height_ratios": (0.25, 0.75)}, figsize=(8, 5)
)
mean = data.mean()
median = data.median()
mode = data.mode().tolist()[0]
fig.suptitle("SPREAD OF DATA FOR " + Name, fontsize=18, fontweight="bold")
sns.boxplot(x=data, showmeans=True, orient="h", color="violet", ax=ax_box)
ax_box.set(xlabel="")
sns.distplot(data, kde=False, color="blue", ax=ax_dis)
ax_dis.axvline(mean, color="r", linestyle="--", linewidth=2)
ax_dis.axvline(median, color="g", linestyle="-", linewidth=2)
ax_dis.axvline(mode, color="y", linestyle="-", linewidth=2)
plt.legend({"Mean": mean, "Median": median, "Mode": mode})
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# select all quantitative columns for checking the spread
list_col = smoking.select_dtypes([np.number]).columns
for i in range(len(list_col)):
dist_box(smoking[list_col[i]])
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# Function to create barplots that indicate percentage for each category.
def bar_perc(plot, feature):
total = len(feature) # length of the column
for p in plot.patches:
percentage = "{:.1f}%".format(
100 * p.get_height() / total
) # percentage of each class of the category
        x = p.get_x() + p.get_width() / 2 - 0.05  # x position of the annotation (bar centre)
        y = p.get_y() + p.get_height()  # y position of the annotation (top of the bar)
plot.annotate(percentage, (x, y), size=12) # annotate the percentage
list_col = [
"gender",
"marital_status",
"highest_qualification",
"nationality",
"ethnicity",
"gross_income",
"region",
"smoke",
"type",
]
plt.figure(figsize=(25, 20))
for i in range(len(list_col)):
plt.subplot(3, 3, i + 1)
plt.title(list_col[i])
sns.histplot(data=smoking, x=smoking[list_col[i]])
sns.set(font_scale=1)
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
plt.figure(figsize=(5, 3))
sns.heatmap(smoking.corr(), annot=True, cmap="YlGn")
plt.show()
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
sns.pairplot(data=smoking, corner=True)
plt.show()
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# Gender vs all numerical variable
fig1, axes1 = plt.subplots(2, 2, figsize=(10, 8))
# select all quantitative columns for checking the spread
list_col = smoking.select_dtypes([np.number]).columns
for i in range(len(list_col)):
row = i // 2
col = i % 2
ax = axes1[row, col]
sns.boxplot(
y=smoking[list_col[i]], x=smoking["gender"], ax=ax, palette="PuBu", orient="v"
).set(title="Gender VS " + list_col[i].upper())
plt.tight_layout()
plt.show()
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# smoker vs Sex
plt.figure(figsize=(13, 5))
ax = sns.countplot(x="smoke", hue="gender", data=smoking, palette="rainbow")
bar_perc(ax, smoking["gender"])
ax.set(title="Smoker vs Gender")
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# smoker vs marital_status
plt.figure(figsize=(13, 5))
ax = sns.countplot(x="smoke", hue="marital_status", data=smoking, palette="rainbow")
bar_perc(ax, smoking["marital_status"])
ax.set(title="Smoker vs marital_status")
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# smoker vs highest_qualification
plt.figure(figsize=(13, 5))
ax = sns.countplot(
x="smoke", hue="highest_qualification", data=smoking, palette="rainbow"
)
bar_perc(ax, smoking["highest_qualification"])
ax.set(title="Smoker vs highest_qualification")
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# smoker vs nationality
plt.figure(figsize=(13, 5))
ax = sns.countplot(x="smoke", hue="nationality", data=smoking, palette="rainbow")
bar_perc(ax, smoking["nationality"])
ax.set(title="Smoker vs nationality")
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# smoker vs ethnicity
plt.figure(figsize=(13, 5))
ax = sns.countplot(x="smoke", hue="ethnicity", data=smoking, palette="rainbow")
bar_perc(ax, smoking["ethnicity"])
ax.set(title="Smoker vs ethnicity")
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# smoker vs gross_income
plt.figure(figsize=(13, 5))
ax = sns.countplot(x="smoke", hue="gross_income", data=smoking, palette="rainbow")
bar_perc(ax, smoking["gross_income"])
ax.set(title="Smoker vs gross_income")
# code by "Insurance claims- EDA &Hypothesis Testing" by YOGITA DARADE ('https://www.kaggle.com/code/yogidsba/insurance-claims-eda-hypothesis-testing')
# smoker vs region
plt.figure(figsize=(13, 5))
ax = sns.countplot(x="smoke", hue="region", data=smoking, palette="rainbow")
bar_perc(ax, smoking["region"])
ax.set(title="Smoker vs region")
# # Chi-Square Test
# I referred to "Chi-Square Test of Independence" by KULDEEP (https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence) for the coding.
# # 1. Prove (or disprove) that there is no difference in smoking between Female and Male
# Step 1: Define the null and alternative hypotheses
# * H0: μ1 = μ2 (there is no difference in smoking between Female and Male; smoking status is independent of gender)
# * H1: μ1 ≠ μ2 (there is a difference in smoking between Female and Male)
# Step 2: Decide the significance level. If the p-value is less than alpha, reject the null hypothesis.
# α = 0.05
# Step 3: Gender is a nominal variable, so the Chi-Square test of independence is a suitable choice: it tells us whether the difference between the two categorical variables is due to chance or to a relationship between them.
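# Note (added for clarity): the expected count for each cell of the contingency table is E_ij = (row i total × column j total) / grand total; scipy's chi2_contingency returns these expected values directly, which is how they are obtained below.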
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Contingency Table
contingency_table = pd.crosstab(smoking["gender"], smoking["smoke"])
print("contingency_table :-\n", contingency_table)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Observed Values
Observed_Values = contingency_table.values
print("Observed Values :-\n", Observed_Values)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
b = stats.chi2_contingency(contingency_table)
Expected_Values = b[3]
print("Expected Values :-\n", Expected_Values)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Degree of Freedom
no_of_rows = contingency_table.shape[0]
no_of_columns = contingency_table.shape[1]
df = (no_of_rows - 1) * (no_of_columns - 1)
print("Degree of Freedom:-", df)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Significance Level 5%
alpha = 0.05
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
chi_square = sum([(o - e) ** 2.0 / e for o, e in zip(Observed_Values, Expected_Values)])
chi_square_statistic = chi_square[0] + chi_square[1]
print("chi-square statistic:-", chi_square_statistic)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# critical_value
critical_value = chi2.ppf(q=1 - alpha, df=df)
print("critical_value:", critical_value)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# p-value
p_value = 1 - chi2.cdf(x=chi_square_statistic, df=df)
print("p-value:", p_value)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
print("Significance level: ", alpha)
print("Degree of Freedom: ", df)
print("chi-square statistic:", chi_square_statistic)
print("critical_value:", critical_value)
print("p-value:", p_value)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# compare chi_square_statistic with the critical_value, and the p-value (the probability of observing a chi-square value at least as large as the computed statistic) with alpha
if chi_square_statistic >= critical_value:
    print("Reject H0: there is a difference in smoking between Female and Male")
else:
    print("Retain H0: there is no difference in smoking between Female and Male")
if p_value <= alpha:
    print("Reject H0: there is a difference in smoking between Female and Male")
else:
    print("Retain H0: there is no difference in smoking between Female and Male")
# # 2. Prove (or disprove) that there is no difference in income between smokers and non-smokers
# Step 1: Define the null and alternative hypotheses
# * H0: μ1 = μ2 (there is no difference in income between smokers and non-smokers; income is independent of smoking status)
# * H1: μ1 ≠ μ2 (there is a difference in income between smokers and non-smokers)
# Step 2: Decide the significance level. If the p-value is less than alpha, reject the null hypothesis.
# α = 0.05
# Step 3: Gross income is an ordinal variable, so the Chi-Square test of independence is a suitable choice: it tells us whether the difference between the two categorical variables is due to chance or to a relationship between them.
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Contingency Table
contingency_table = pd.crosstab(smoking["gross_income"], smoking["smoke"])
print("contingency_table :-\n", contingency_table)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Observed Values
Observed_Values = contingency_table.values
print("Observed Values :-\n", Observed_Values)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
b = stats.chi2_contingency(contingency_table)
Expected_Values = b[3]
print("Expected Values :-\n", Expected_Values)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Degree of Freedom
no_of_rows = contingency_table.shape[0]
no_of_columns = contingency_table.shape[1]
df = (no_of_rows - 1) * (no_of_columns - 1)
print("Degree of Freedom:-", df)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Significance Level 5%
alpha = 0.05
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
chi_square = sum([(o - e) ** 2.0 / e for o, e in zip(Observed_Values, Expected_Values)])
chi_square_statistic = chi_square[0] + chi_square[1]
print("chi-square statistic:-", chi_square_statistic)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# critical_value
critical_value = chi2.ppf(q=1 - alpha, df=df)
print("critical_value:", critical_value)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# p-value
p_value = 1 - chi2.cdf(x=chi_square_statistic, df=df)
print("p-value:", p_value)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
print("Significance level: ", alpha)
print("Degree of Freedom: ", df)
print("chi-square statistic:", chi_square_statistic)
print("critical_value:", critical_value)
print("p-value:", p_value)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# compare chi_square_statistic with the critical_value, and the p-value (the probability of observing a chi-square value at least as large as the computed statistic) with alpha
if chi_square_statistic >= critical_value:
    print("Reject H0: there is a difference in income between smokers and non-smokers")
else:
    print("Retain H0: there is no difference in income between smokers and non-smokers")
if p_value <= alpha:
    print("Reject H0: there is a difference in income between smokers and non-smokers")
else:
    print("Retain H0: there is no difference in income between smokers and non-smokers")
# # 3. Prove (or disprove) that there is no difference in highest_qualification between smokers and non-smokers
# Step 1: Define the null and alternative hypotheses
# * H0: μ1 = μ2 (there is no difference in highest_qualification between smokers and non-smokers; highest qualification is independent of smoking status)
# * H1: μ1 ≠ μ2 (there is a difference in highest_qualification between smokers and non-smokers)
# Step 2: Decide the significance level. If the p-value is less than alpha, reject the null hypothesis.
# α = 0.05
# Step 3: Highest qualification is an ordinal variable, so the Chi-Square test of independence is a suitable choice: it tells us whether the difference between the two categorical variables is due to chance or to a relationship between them.
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Contingency Table
contingency_table = pd.crosstab(smoking["highest_qualification"], smoking["smoke"])
print("contingency_table :-\n", contingency_table)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Observed Values
Observed_Values = contingency_table.values
print("Observed Values :-\n", Observed_Values)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
b = stats.chi2_contingency(contingency_table)
Expected_Values = b[3]
print("Expected Values :-\n", Expected_Values)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Degree of Freedom
no_of_rows = contingency_table.shape[0]
no_of_columns = contingency_table.shape[1]
df = (no_of_rows - 1) * (no_of_columns - 1)
print("Degree of Freedom:-", df)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# Significance Level 5%
alpha = 0.05
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
chi_square = sum([(o - e) ** 2.0 / e for o, e in zip(Observed_Values, Expected_Values)])
chi_square_statistic = chi_square[0] + chi_square[1]
print("chi-square statistic:-", chi_square_statistic)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# critical_value
critical_value = chi2.ppf(q=1 - alpha, df=df)
print("critical_value:", critical_value)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# p-value
p_value = 1 - chi2.cdf(x=chi_square_statistic, df=df)
print("p-value:", p_value)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
print("Significance level: ", alpha)
print("Degree of Freedom: ", df)
print("chi-square statistic:", chi_square_statistic)
print("critical_value:", critical_value)
print("p-value:", p_value)
# code by "Chi-Square Test of Independence" by KULDEEP(https://www.kaggle.com/code/kuldeepnpatel/chi-square-test-of-independence)
# compare chi_square_statistic with the critical_value, and the p-value (the probability of observing a chi-square value at least as large as the computed statistic) with alpha
if chi_square_statistic >= critical_value:
    print(
        "Reject H0: there is a difference in highest_qualification between smokers and non-smokers"
    )
else:
    print(
        "Retain H0: there is no difference in highest_qualification between smokers and non-smokers"
    )
if p_value <= alpha:
    print(
        "Reject H0: there is a difference in highest_qualification between smokers and non-smokers"
    )
else:
    print(
        "Retain H0: there is no difference in highest_qualification between smokers and non-smokers"
    )
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/453/129453748.ipynb
|
smoking-dataset-from-uk
|
utkarshx27
|
[{"Id": 129453748, "ScriptId": 38369641, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6437951, "CreationDate": "05/14/2023 01:01:18", "VersionNumber": 1.0, "Title": "Smoking Analysis| Chi-Square Test", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 442.0, "LinesInsertedFromPrevious": 442.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 21}]
|
[{"Id": 185519789, "KernelVersionId": 129453748, "SourceDatasetVersionId": 5651804}]
|
[{"Id": 5651804, "DatasetId": 3248690, "DatasourceVersionId": 5727175, "CreatorUserId": 13364933, "LicenseName": "CC0: Public Domain", "CreationDate": "05/10/2023 05:41:12", "VersionNumber": 1.0, "Title": "Smoking Dataset from UK", "Slug": "smoking-dataset-from-uk", "Subtitle": "Demographic Characteristics & Tobacco Consumption Habits: UK Smoking Survey Data", "Description": "``` \nSurvey data on smoking habits from the United Kingdom. The data set can be used for analyzing the demographic characteristics of smokers and types of tobacco consumed. A data frame with 1691 observations on the following 12 variables.\n```\n| Column | Description |\n| --- | --- |\n| gender | Gender with levels Female and Male. |\n| age | Age. |\n| marital_status | Marital status with levels Divorced, Married, Separated, Single and Widowed. |\n| highest_qualification | Highest education level with levels A Levels, Degree, GCSE/CSE, GCSE/O Level, Higher/Sub Degree, No Qualification, ONC/BTEC and Other/Sub Degree |\n| nationality | Nationality with levels British, English, Irish, Scottish, Welsh, Other, Refused and Unknown. |\n| ethnicity | Ethnicity with levels Asian, Black, Chinese, Mixed, White and Refused Unknown. |\n| gross_income | Gross income with levels Under 2,600, 2,600 to 5,200, 5,200 to 10,400, 10,400 to 15,600, 15,600 to 20,800, 20,800 to 28,600, 28,600 to 36,400, Above 36,400, Refused and Unknown. |\n| region | Region with levels London, Midlands & East Anglia, Scotland, South East, South West, The North and Wales |\n| smoke | Smoking status with levels No and Yes |\n| amt_weekends | Number of cigarettes smoked per day on weekends. |\n| amt_weekdays | Number of cigarettes smoked per day on weekdays. |\n| type | Type of cigarettes smoked with levels Packets, Hand-Rolled, Both/Mainly Packets and Both/Mainly Hand-Rolled\n |\n\n# Source\nNational STEM Centre, Large Datasets from stats4schools, https://www.stem.org.uk/resources/elibrary/resource/28452/large-datasets-stats4schools.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3248690, "CreatorUserId": 13364933, "OwnerUserId": 13364933.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5651804.0, "CurrentDatasourceVersionId": 5727175.0, "ForumId": 3314043, "Type": 2, "CreationDate": "05/10/2023 05:41:12", "LastActivityDate": "05/10/2023", "TotalViews": 14838, "TotalDownloads": 2967, "TotalVotes": 58, "TotalKernels": 10}]
|
[{"Id": 13364933, "UserName": "utkarshx27", "DisplayName": "Utkarsh Singh", "RegisterDate": "01/21/2023", "PerformanceTier": 2}]
|
| false | 0 | 7,817 | 21 | 8,404 | 7,817 |
||
129809622
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import missingno as msno
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.preprocessing import (
MinMaxScaler,
StandardScaler,
PowerTransformer,
FunctionTransformer,
)
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import KFold, StratifiedKFold, RandomizedSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from skopt.space import Real, Categorical, Integer
from skopt import BayesSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline, Pipeline
from sklearn.decomposition import PCA
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import (
    RandomForestClassifier,
    ExtraTreesClassifier,  # needed: the search space below instantiates it via globals()
    VotingClassifier,
    GradientBoostingClassifier,
)
from sklearn.experimental import enable_iterative_imputer
# # Load Data, Preprocess and EDA
train1 = pd.read_csv("../input/tylerclassification/training1.csv").convert_dtypes()
train2 = pd.read_csv("../input/tylerclassification/training2.csv").convert_dtypes()
train = pd.concat([train1, train2]).convert_dtypes()
test = pd.read_csv("../input/tylerclassification/test.csv").convert_dtypes()
x_train = train.copy()
y_train = x_train.pop("label")
# # Define Constants
RANDOM_STATE = 42
N_SPLITS = 5
KF = StratifiedKFold(n_splits=N_SPLITS, shuffle=True, random_state=RANDOM_STATE)
def evaluate_model(X, y, model, kf):
scores = []
if hasattr(model, "named_steps"):
print(f"Fitting model: {model.named_steps}")
else:
print(f"Fitting model: {model.__class__.__name__}")
for i, (train_idx, val_idx) in enumerate(kf.split(X, y), start=1):
print(f"fold {i} of {kf.get_n_splits()}")
X_train, y_train = X.iloc[train_idx], y.iloc[train_idx]
X_val, y_val = X.iloc[val_idx], y.iloc[val_idx]
model.fit(X_train, y_train)
y_pred = model.predict(X_val)
scores.append(np.mean(y_pred == y_val))
return scores
# # Benchmark Model
rf = make_pipeline(SimpleImputer(strategy="median"), RandomForestClassifier())
scores = evaluate_model(x_train, y_train, rf, KF)
print(f"{rf.named_steps}: {np.mean(scores):.4f}")
nb = make_pipeline(
SimpleImputer(strategy="median"), FunctionTransformer(np.log1p), GaussianNB()
)
scores = evaluate_model(x_train, y_train, nb, KF)
print(f"{nb.named_steps}: {np.mean(scores):.4f}")
knn = make_pipeline(SimpleImputer(strategy="median"), KNeighborsClassifier())
scores = evaluate_model(x_train, y_train, knn, KF)
print(f"{knn.named_steps}: {np.mean(scores):.4f}")
# # Ensemble
models = {
"LogisticRegression": {
"classifier__C": Real(1e-6, 1e6, prior="log-uniform"),
"classifier__penalty": Categorical(["l2"]),
"preprocessor__num__strategy": Categorical(["mean", "median"]),
"pca__n_components": Integer(10, 50),
},
"RandomForestClassifier": {
"classifier__n_estimators": Integer(10, 150),
"classifier__max_depth": Integer(10, 50),
"preprocessor__num__strategy": Categorical(["mean", "median"]),
"pca__n_components": Integer(10, 50),
},
"MLPClassifier": {
"classifier__hidden_layer_sizes": Categorical([(50,), (100,), (50, 50)]),
"classifier__activation": Categorical(["tanh", "relu"]),
"classifier__solver": Categorical(["sgd", "adam"]),
"preprocessor__num__strategy": Categorical(["mean", "median"]),
"pca__n_components": Integer(10, 50),
},
"ExtraTreesClassifier": {
"classifier__n_estimators": Integer(10, 150),
"classifier__max_depth": Integer(10, 50),
"preprocessor__num__strategy": Categorical(["mean", "median"]),
"pca__n_components": Integer(10, 50),
},
"GaussianNB": {
"preprocessor__num__strategy": Categorical(["mean", "median"]),
"pca__n_components": Integer(10, 50),
},
}
# The Ensemble helper below tunes each model with a randomized search; it is instantiated and fitted after its definition.
class Ensemble:
def __init__(self, models, kf, random_state):
self.models = models
self.kf = kf
self.random_state = random_state
self.fitted_models = {}
self.best_scores = {}
def fit(self, X, y, ensemble=False, verbose=1):
for model_name in self.models:
print(f"Tuning model: {model_name}")
# Create the pipeline
model = Pipeline(
[
(
"preprocessor",
ColumnTransformer(
transformers=[
(
"num",
SimpleImputer(),
X.select_dtypes(include=np.number).columns.tolist(),
)
],
remainder="passthrough",
),
),
("scaler", StandardScaler()),
("pca", PCA()),
("classifier", globals()[model_name]()),
]
)
# Define the search space
search_space = self.models[model_name]
# Define the Bayes search object
"""
opt = BayesSearchCV(
estimator=model,
search_spaces=search_space,
n_iter=50,
cv=self.kf,
n_jobs=-1,
verbose=verbose,
scoring='accuracy' # replace this with your preferred scoring metric
)
"""
opt = RandomizedSearchCV(
estimator=model,
param_distributions=search_space,
n_iter=50,
cv=self.kf,
n_jobs=-1,
scoring="accuracy",
random_state=self.random_state,
)
# Fit model
opt.fit(X, y)
print(f"Best score for {model_name}: {opt.best_score_:.4f}")
self.fitted_models[model_name] = opt.best_estimator_
if ensemble:
self.ensemble = VotingClassifier(
[(name, model) for name, model in self.fitted_models.items()],
voting="soft",
)
self.ensemble.fit(X, y)
return self
ens = Ensemble(models, kf=KF, random_state=RANDOM_STATE)
ens.fit(x_train, y_train)
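# Illustrative extension (an assumption, not shown in the original notebook): passing ensemble=True
# additionally builds a soft-voting classifier over the tuned pipelines, which can then score the
# test set, assuming `test` carries the same feature columns as the training frame.
ens_soft = Ensemble(models, kf=KF, random_state=RANDOM_STATE)
ens_soft.fit(x_train, y_train, ensemble=True)
test_preds = ens_soft.ensemble.predict(test)
print(test_preds[:10])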
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/809/129809622.ipynb
| null | null |
[{"Id": 129809622, "ScriptId": 38560343, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 9319351, "CreationDate": "05/16/2023 16:05:25", "VersionNumber": 1.0, "Title": "tylerWork", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 182.0, "LinesInsertedFromPrevious": 182.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import missingno as msno
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.preprocessing import (
MinMaxScaler,
StandardScaler,
PowerTransformer,
FunctionTransformer,
)
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import KFold, StratifiedKFold, RandomizedSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from skopt.space import Real, Categorical, Integer
from skopt import BayesSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline, Pipeline
from sklearn.decomposition import PCA
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import (
    RandomForestClassifier,
    ExtraTreesClassifier,  # needed: the search space below instantiates it via globals()
    VotingClassifier,
    GradientBoostingClassifier,
)
from sklearn.experimental import enable_iterative_imputer
# # Load Data, Preprocess and EDA
train1 = pd.read_csv("../input/tylerclassification/training1.csv").convert_dtypes()
train2 = pd.read_csv("../input/tylerclassification/training2.csv").convert_dtypes()
train = pd.concat([train1, train2]).convert_dtypes()
test = pd.read_csv("../input/tylerclassification/test.csv").convert_dtypes()
x_train = train.copy()
y_train = x_train.pop("label")
# # Define Constants
RANDOM_STATE = 42
N_SPLITS = 5
KF = StratifiedKFold(n_splits=N_SPLITS, shuffle=True, random_state=RANDOM_STATE)
def evaluate_model(X, y, model, kf):
scores = []
if hasattr(model, "named_steps"):
print(f"Fitting model: {model.named_steps}")
else:
print(f"Fitting model: {model.__class__.__name__}")
for i, (train_idx, val_idx) in enumerate(kf.split(X, y), start=1):
print(f"fold {i} of {kf.get_n_splits()}")
X_train, y_train = X.iloc[train_idx], y.iloc[train_idx]
X_val, y_val = X.iloc[val_idx], y.iloc[val_idx]
model.fit(X_train, y_train)
y_pred = model.predict(X_val)
scores.append(np.mean(y_pred == y_val))
return scores
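# For comparison, the same per-fold accuracies can be obtained with scikit-learn's built-in
# helper (a sketch; the explicit loop above is kept for its per-fold logging):
from sklearn.model_selection import cross_val_score

def evaluate_model_sklearn(X, y, model, kf):
    return cross_val_score(model, X, y, cv=kf, scoring="accuracy")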
# # Benchmark Model
rf = make_pipeline(SimpleImputer(strategy="median"), RandomForestClassifier())
scores = evaluate_model(x_train, y_train, rf, KF)
print(f"{rf.named_steps}: {np.mean(scores):.4f}")
nb = make_pipeline(
SimpleImputer(strategy="median"), FunctionTransformer(np.log1p), GaussianNB()
)
scores = evaluate_model(x_train, y_train, nb, KF)
print(f"{nb.named_steps}: {np.mean(scores):.4f}")
knn = make_pipeline(SimpleImputer(strategy="median"), KNeighborsClassifier())
scores = evaluate_model(x_train, y_train, knn, KF)
print(f"{knn.named_steps}: {np.mean(scores):.4f}")
# # Ensemble
models = {
"LogisticRegression": {
"classifier__C": Real(1e-6, 1e6, prior="log-uniform"),
"classifier__penalty": Categorical(["l2"]),
"preprocessor__num__strategy": Categorical(["mean", "median"]),
"pca__n_components": Integer(10, 50),
},
"RandomForestClassifier": {
"classifier__n_estimators": Integer(10, 150),
"classifier__max_depth": Integer(10, 50),
"preprocessor__num__strategy": Categorical(["mean", "median"]),
"pca__n_components": Integer(10, 50),
},
"MLPClassifier": {
"classifier__hidden_layer_sizes": Categorical([(50,), (100,), (50, 50)]),
"classifier__activation": Categorical(["tanh", "relu"]),
"classifier__solver": Categorical(["sgd", "adam"]),
"preprocessor__num__strategy": Categorical(["mean", "median"]),
"pca__n_components": Integer(10, 50),
},
"ExtraTreesClassifier": {
"classifier__n_estimators": Integer(10, 150),
"classifier__max_depth": Integer(10, 50),
"preprocessor__num__strategy": Categorical(["mean", "median"]),
"pca__n_components": Integer(10, 50),
},
"GaussianNB": {
"preprocessor__num__strategy": Categorical(["mean", "median"]),
"pca__n_components": Integer(10, 50),
},
}
class Ensemble:
def __init__(self, models, kf, random_state):
self.models = models
self.kf = kf
self.random_state = random_state
self.fitted_models = {}
self.best_scores = {}
def fit(self, X, y, ensemble=False, verbose=1):
for model_name in self.models:
print(f"Tuning model: {model_name}")
# Create the pipeline
model = Pipeline(
[
(
"preprocessor",
ColumnTransformer(
transformers=[
(
"num",
SimpleImputer(),
X.select_dtypes(include=np.number).columns.tolist(),
)
],
remainder="passthrough",
),
),
("scaler", StandardScaler()),
("pca", PCA()),
("classifier", globals()[model_name]()),
]
)
# Define the search space
search_space = self.models[model_name]
# Define the Bayes search object
"""
opt = BayesSearchCV(
estimator=model,
search_spaces=search_space,
n_iter=50,
cv=self.kf,
n_jobs=-1,
verbose=verbose,
scoring='accuracy' # replace this with your preferred scoring metric
)
"""
opt = RandomizedSearchCV(
estimator=model,
param_distributions=search_space,
n_iter=50,
cv=self.kf,
n_jobs=-1,
scoring="accuracy",
random_state=self.random_state,
)
# Fit model
opt.fit(X, y)
print(f"Best score for {model_name}: {opt.best_score_:.4f}")
self.fitted_models[model_name] = opt.best_estimator_
if ensemble:
self.ensemble = VotingClassifier(
[(name, model) for name, model in self.fitted_models.items()],
voting="soft",
)
self.ensemble.fit(X, y)
return self
ens = Ensemble(models, kf=KF, random_state=RANDOM_STATE)
ens.fit(x_train, y_train)
| false | 0 | 1,718 | 0 | 1,718 | 1,718 |
||
129809679
|
<jupyter_start><jupyter_text>McDonald's Nutrient Information
The dataset contains information from [McDonald's official website ](https://www.mcdonaldsindia.com/nutrition.html).
It has calorie content and nutrition information from their entire menu.
The dataset is specific to the Indian McDonald's menu.
Kaggle dataset identifier: mcdonalds-nutrient-information
<jupyter_script>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("/kaggle/input/mcdonalds-nutrient-information/McDonald Nutrient.csv")
df.head()
print("NA Values...")
df.isna().sum()
print("Unique Menu Items and their count")
print(df["Menu Items"].nunique())
print(df["Menu Items"].unique())
df.hist(figsize=(10, 10), bins=20)
plt.figure(figsize=(13, 10))
sns.heatmap(df.corr(numeric_only=True), annot=True)
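# A small follow-up sketch (not in the original notebook): rank menu items by the first
# numeric nutrition column, whatever it happens to be named in this file.
num_cols = df.select_dtypes("number").columns
if len(num_cols) > 0:
    df.set_index("Menu Items")[num_cols[0]].nlargest(10).plot.barh(
        title=f"Top 10 menu items by {num_cols[0]}"
    )
    plt.show()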
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/809/129809679.ipynb
|
mcdonalds-nutrient-information
|
squidbob
|
[{"Id": 129809679, "ScriptId": 38606046, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8095214, "CreationDate": "05/16/2023 16:05:53", "VersionNumber": 1.0, "Title": "Basic DV", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 23.0, "LinesInsertedFromPrevious": 23.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 3}]
|
[{"Id": 186183560, "KernelVersionId": 129809679, "SourceDatasetVersionId": 5699843}]
|
[{"Id": 5699843, "DatasetId": 3277530, "DatasourceVersionId": 5775503, "CreatorUserId": 8095214, "LicenseName": "CC0: Public Domain", "CreationDate": "05/16/2023 15:50:10", "VersionNumber": 1.0, "Title": "McDonald's Nutrient Information", "Slug": "mcdonalds-nutrient-information", "Subtitle": "Make sure your caloric intake is in check", "Description": "The dataset contains information from [McDonald's official website ](https://www.mcdonaldsindia.com/nutrition.html).\n\nIt has calorie content and nutrition information from their entire menu.\n\nThe dataset is specific to the Indian McDonald's menu.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3277530, "CreatorUserId": 8095214, "OwnerUserId": 8095214.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5699843.0, "CurrentDatasourceVersionId": 5775503.0, "ForumId": 3343219, "Type": 2, "CreationDate": "05/16/2023 15:50:10", "LastActivityDate": "05/16/2023", "TotalViews": 199, "TotalDownloads": 57, "TotalVotes": 0, "TotalKernels": 1}]
|
[{"Id": 8095214, "UserName": "squidbob", "DisplayName": "Yash Chauhan", "RegisterDate": "08/09/2021", "PerformanceTier": 2}]
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("/kaggle/input/mcdonalds-nutrient-information/McDonald Nutrient.csv")
df.head()
print("NA Values...")
df.isna().sum()
print("Unique Menu Items and their count")
print(df["Menu Items"].nunique())
print(df["Menu Items"].unique())
df.hist(figsize=(10, 10), bins=20)
plt.figure(figsize=(13, 10))
sns.heatmap(df.corr(numeric_only=True), annot=True)
| false | 1 | 153 | 3 | 254 | 153 |
||
129809628
|
<jupyter_start><jupyter_text>Company Bankruptcy Prediction
### Similar Datasets
- The Boston House-Price Data: [LINK](https://www.kaggle.com/fedesoriano/the-boston-houseprice-data)
- Gender Pay Gap Dataset: [LINK](https://www.kaggle.com/fedesoriano/gender-pay-gap-dataset)
- Spanish Wine Quality Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/spanish-wine-quality-dataset)
### Context
The data were collected from the Taiwan Economic Journal for the years 1999 to 2009. Company bankruptcy was defined based on the business regulations of the Taiwan Stock Exchange.
### Attribute Information
**Version 2:** Updated column names and description to make the data easier to understand (Y = Output feature, X = Input features)
Y - Bankrupt?: Class label
X1 - ROA(C) before interest and depreciation before interest: Return On Total Assets(C)
X2 - ROA(A) before interest and % after tax: Return On Total Assets(A)
X3 - ROA(B) before interest and depreciation after tax: Return On Total Assets(B)
X4 - Operating Gross Margin: Gross Profit/Net Sales
X5 - Realized Sales Gross Margin: Realized Gross Profit/Net Sales
X6 - Operating Profit Rate: Operating Income/Net Sales
X7 - Pre-tax net Interest Rate: Pre-Tax Income/Net Sales
X8 - After-tax net Interest Rate: Net Income/Net Sales
X9 - Non-industry income and expenditure/revenue: Net Non-operating Income Ratio
X10 - Continuous interest rate (after tax): Net Income-Exclude Disposal Gain or Loss/Net Sales
X11 - Operating Expense Rate: Operating Expenses/Net Sales
X12 - Research and development expense rate: (Research and Development Expenses)/Net Sales
X13 - Cash flow rate: Cash Flow from Operating/Current Liabilities
X14 - Interest-bearing debt interest rate: Interest-bearing Debt/Equity
X15 - Tax rate (A): Effective Tax Rate
X16 - Net Value Per Share (B): Book Value Per Share(B)
X17 - Net Value Per Share (A): Book Value Per Share(A)
X18 - Net Value Per Share (C): Book Value Per Share(C)
X19 - Persistent EPS in the Last Four Seasons: EPS-Net Income
X20 - Cash Flow Per Share
X21 - Revenue Per Share (Yuan ¥): Sales Per Share
X22 - Operating Profit Per Share (Yuan ¥): Operating Income Per Share
X23 - Per Share Net profit before tax (Yuan ¥): Pretax Income Per Share
X24 - Realized Sales Gross Profit Growth Rate
X25 - Operating Profit Growth Rate: Operating Income Growth
X26 - After-tax Net Profit Growth Rate: Net Income Growth
X27 - Regular Net Profit Growth Rate: Continuing Operating Income after Tax Growth
X28 - Continuous Net Profit Growth Rate: Net Income-Excluding Disposal Gain or Loss Growth
X29 - Total Asset Growth Rate: Total Asset Growth
X30 - Net Value Growth Rate: Total Equity Growth
X31 - Total Asset Return Growth Rate Ratio: Return on Total Asset Growth
X32 - Cash Reinvestment %: Cash Reinvestment Ratio
X33 - Current Ratio
X34 - Quick Ratio: Acid Test
X35 - Interest Expense Ratio: Interest Expenses/Total Revenue
X36 - Total debt/Total net worth: Total Liability/Equity Ratio
X37 - Debt ratio %: Liability/Total Assets
X38 - Net worth/Assets: Equity/Total Assets
X39 - Long-term fund suitability ratio (A): (Long-term Liability+Equity)/Fixed Assets
X40 - Borrowing dependency: Cost of Interest-bearing Debt
X41 - Contingent liabilities/Net worth: Contingent Liability/Equity
X42 - Operating profit/Paid-in capital: Operating Income/Capital
X43 - Net profit before tax/Paid-in capital: Pretax Income/Capital
X44 - Inventory and accounts receivable/Net value: (Inventory+Accounts Receivables)/Equity
X45 - Total Asset Turnover
X46 - Accounts Receivable Turnover
X47 - Average Collection Days: Days Receivable Outstanding
X48 - Inventory Turnover Rate (times)
X49 - Fixed Assets Turnover Frequency
X50 - Net Worth Turnover Rate (times): Equity Turnover
X51 - Revenue per person: Sales Per Employee
X52 - Operating profit per person: Operation Income Per Employee
X53 - Allocation rate per person: Fixed Assets Per Employee
X54 - Working Capital to Total Assets
X55 - Quick Assets/Total Assets
X56 - Current Assets/Total Assets
X57 - Cash/Total Assets
X58 - Quick Assets/Current Liability
X59 - Cash/Current Liability
X60 - Current Liability to Assets
X61 - Operating Funds to Liability
X62 - Inventory/Working Capital
X63 - Inventory/Current Liability
X64 - Current Liabilities/Liability
X65 - Working Capital/Equity
X66 - Current Liabilities/Equity
X67 - Long-term Liability to Current Assets
X68 - Retained Earnings to Total Assets
X69 - Total income/Total expense
X70 - Total expense/Assets
X71 - Current Asset Turnover Rate: Current Assets to Sales
X72 - Quick Asset Turnover Rate: Quick Assets to Sales
X73 - Working capitcal Turnover Rate: Working Capital to Sales
X74 - Cash Turnover Rate: Cash to Sales
X75 - Cash Flow to Sales
X76 - Fixed Assets to Assets
X77 - Current Liability to Liability
X78 - Current Liability to Equity
X79 - Equity to Long-term Liability
X80 - Cash Flow to Total Assets
X81 - Cash Flow to Liability
X82 - CFO to Assets
X83 - Cash Flow to Equity
X84 - Current Liability to Current Assets
X85 - Liability-Assets Flag: 1 if Total Liability exceeds Total Assets, 0 otherwise
X86 - Net Income to Total Assets
X87 - Total assets to GNP price
X88 - No-credit Interval
X89 - Gross Profit to Sales
X90 - Net Income to Stockholder's Equity
X91 - Liability to Equity
X92 - Degree of Financial Leverage (DFL)
X93 - Interest Coverage Ratio (Interest expense to EBIT)
X94 - Net Income Flag: 1 if Net Income is Negative for the last two years, 0 otherwise
X95 - Equity to Liability
### Source
Deron Liang and Chih-Fong Tsai, deronliang '@' gmail.com; cftsai '@' mgt.ncu.edu.tw, National Central University, Taiwan
The data was obtained from UCI Machine Learning Repository: [https://archive.ics.uci.edu/ml/datasets/Taiwanese+Bankruptcy+Prediction](https://archive.ics.uci.edu/ml/datasets/Taiwanese+Bankruptcy+Prediction)
### Relevant Papers
Liang, D., Lu, C.-C., Tsai, C.-F., and Shih, G.-A. (2016) Financial Ratios and Corporate Governance Indicators in Bankruptcy Prediction: A Comprehensive Study. European Journal of Operational Research, vol. 252, no. 2, pp. 561-572.
[https://www.sciencedirect.com/science/article/pii/S0377221716000412](https://www.sciencedirect.com/science/article/pii/S0377221716000412)
Kaggle dataset identifier: company-bankruptcy-prediction
<jupyter_code>import pandas as pd
df = pd.read_csv('company-bankruptcy-prediction/data.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 6819 entries, 0 to 6818
Data columns (total 96 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Bankrupt? 6819 non-null int64
1 ROA(C) before interest and depreciation before interest 6819 non-null float64
2 ROA(A) before interest and % after tax 6819 non-null float64
3 ROA(B) before interest and depreciation after tax 6819 non-null float64
4 Operating Gross Margin 6819 non-null float64
5 Realized Sales Gross Margin 6819 non-null float64
6 Operating Profit Rate 6819 non-null float64
7 Pre-tax net Interest Rate 6819 non-null float64
8 After-tax net Interest Rate 6819 non-null float64
9 Non-industry income and expenditure/revenue 6819 non-null float64
10 Continuous interest rate (after tax) 6819 non-null float64
11 Operating Expense Rate 6819 non-null float64
12 Research and development expense rate 6819 non-null float64
13 Cash flow rate 6819 non-null float64
14 Interest-bearing debt interest rate 6819 non-null float64
15 Tax rate (A) 6819 non-null float64
16 Net Value Per Share (B) 6819 non-null float64
17 Net Value Per Share (A) 6819 non-null float64
18 Net Value Per Share (C) 6819 non-null float64
19 Persistent EPS in the Last Four Seasons 6819 non-null float64
20 Cash Flow Per Share 6819 non-null float64
21 Revenue Per Share (Yuan ¥) 6819 non-null float64
22 Operating Profit Per Share (Yuan ¥) 6819 non-null float64
23 Per Share Net profit before tax (Yuan ¥) 6819 non-null float64
24 Realized Sales Gross Profit Growth Rate 6819 non-null float64
25 Operating Profit Growth Rate 6819 non-null float64
26 After-tax Net Profit Growth Rate 6819 non-null float64
27 Regular Net Profit Growth Rate 6819 non-null float64
28 Continuous Net Profit Growth Rate 6819 non-null float64
29 Total Asset Growth Rate 6819 non-null float64
30 Net Value Growth Rate 6819 non-null float64
31 Total Asset Return Growth Rate Ratio 6819 non-null float64
32 Cash Reinvestment % 6819 non-null float64
33 Current Ratio 6819 non-null float64
34 Quick Ratio 6819 non-null float64
35 Interest Expense Ratio 6819 non-null float64
36 Total debt/Total net worth 6819 non-null float64
37 Debt ratio % 6819 non-null float64
38 Net worth/Assets 6819 non-null float64
39 Long-term fund suitability ratio (A) 6819 non-null float64
40 Borrowing dependency 6819 non-null float64
41 Contingent liabilities/Net worth 6819 non-null float64
42 Operating profit/Paid-in capital 6819 non-null float64
43 Net profit before tax/Paid-in capital 6819 non-null float64
44 Inventory and accounts receivable/Net value 6819 non-null float64
45 Total Asset Turnover 6819 non-null float64
46 Accounts Receivable Turnover 6819 non-null float64
47 Average Collection Days 6819 non-null float64
48 Inventory Turnover Rate (times) 6819 non-null float64
49 Fixed Assets Turnover Frequency 6819 non-null float64
50 Net Worth Turnover Rate (times) 6819 non-null float64
51 Revenue per person 6819 non-null float64
52 Operating profit per person 6819 non-null float64
53 Allocation rate per person 6819 non-null float64
54 Working Capital to Total Assets 6819 non-null float64
55 Quick Assets/Total Assets 6819 non-null float64
56 Current Assets/Total Assets 6819 non-null float64
57 Cash/Total Assets 6819 non-null float64
58 Quick Assets/Current Liability 6819 non-null float64
59 Cash/Current Liability 6819 non-null float64
60 Current Liability to Assets 6819 non-null float64
61 Operating Funds to Liability 6819 non-null float64
62 Inventory/Working Capital 6819 non-null float64
63 Inventory/Current Liability 6819 non-null float64
64 Current Liabilities/Liability 6819 non-null float64
65 Working Capital/Equity 6819 non-null float64
66 Current Liabilities/Equity 6819 non-null float64
67 Long-term Liability to Current Assets 6819 non-null float64
68 Retained Earnings to Total Assets 6819 non-null float64
69 Total income/Total expense 6819 non-null float64
70 Total expense/Assets 6819 non-null float64
71 Current Asset Turnover Rate 6819 non-null float64
72 Quick Asset Turnover Rate 6819 non-null float64
73 Working capitcal Turnover Rate 6819 non-null float64
74 Cash Turnover Rate 6819 non-null float64
75 Cash Flow to Sales 6819 non-null float64
76 Fixed Assets to Assets 6819 non-null float64
77 Current Liability to Liability 6819 non-null float64
78 Current Liability to Equity 6819 non-null float64
79 Equity to Long-term Liability 6819 non-null float64
80 Cash Flow to Total Assets 6819 non-null float64
81 Cash Flow to Liability 6819 non-null float64
82 CFO to Assets 6819 non-null float64
83 Cash Flow to Equity 6819 non-null float64
84 Current Liability to Current Assets 6819 non-null float64
85 Liability-Assets Flag 6819 non-null int64
86 Net Income to Total Assets 6819 non-null float64
87 Total assets to GNP price 6819 non-null float64
88 No-credit Interval 6819 non-null float64
89 Gross Profit to Sales 6819 non-null float64
90 Net Income to Stockholder's Equity 6819 non-null float64
91 Liability to Equity 6819 non-null float64
92 Degree of Financial Leverage (DFL) 6819 non-null float64
93 Interest Coverage Ratio (Interest expense to EBIT) 6819 non-null float64
94 Net Income Flag 6819 non-null int64
95 Equity to Liability 6819 non-null float64
dtypes: float64(93), int64(3)
memory usage: 5.0 MB
<jupyter_text>Examples:
{
"Bankrupt?": 1.0,
" ROA(C) before interest and depreciation before interest": 0.3705942573,
" ROA(A) before interest and % after tax": 0.4243894461,
" ROA(B) before interest and depreciation after tax": 0.4057497725,
" Operating Gross Margin": 0.6014572133,
" Realized Sales Gross Margin": 0.6014572133,
" Operating Profit Rate": 0.9989692032,
" Pre-tax net Interest Rate": 0.7968871459,
" After-tax net Interest Rate": 0.8088093609,
" Non-industry income and expenditure/revenue": 0.3026464339,
" Continuous interest rate (after tax)": 0.7809848502000001,
" Operating Expense Rate": 0.0001256969,
" Research and development expense rate": 0.0,
" Cash flow rate": 0.4581431435,
" Interest-bearing debt interest rate": 0.0007250725000000001,
" Tax rate (A)": 0.0,
" Net Value Per Share (B)": 0.1479499389,
" Net Value Per Share (A)": 0.1479499389,
" Net Value Per Share (C)": 0.1479499389,
" Persistent EPS in the Last Four Seasons": 0.16914058810000002,
"...": "and 76 more columns"
}
{
"Bankrupt?": 1.0,
" ROA(C) before interest and depreciation before interest": 0.4642909375,
" ROA(A) before interest and % after tax": 0.53821413,
" ROA(B) before interest and depreciation after tax": 0.5167300177,
" Operating Gross Margin": 0.6102350855,
" Realized Sales Gross Margin": 0.6102350855,
" Operating Profit Rate": 0.9989459782000001,
" Pre-tax net Interest Rate": 0.7973801913,
" After-tax net Interest Rate": 0.8093007257,
" Non-industry income and expenditure/revenue": 0.3035564303,
" Continuous interest rate (after tax)": 0.7815059743,
" Operating Expense Rate": 0.0002897851,
" Research and development expense rate": 0.0,
" Cash flow rate": 0.4618672572,
" Interest-bearing debt interest rate": 0.0006470647000000001,
" Tax rate (A)": 0.0,
" Net Value Per Share (B)": 0.18225106400000002,
" Net Value Per Share (A)": 0.18225106400000002,
" Net Value Per Share (C)": 0.18225106400000002,
" Persistent EPS in the Last Four Seasons": 0.20894393500000003,
"...": "and 76 more columns"
}
{
"Bankrupt?": 1.0,
" ROA(C) before interest and depreciation before interest": 0.4260712719,
" ROA(A) before interest and % after tax": 0.4990187527,
" ROA(B) before interest and depreciation after tax": 0.47229509070000003,
" Operating Gross Margin": 0.6014500065,
" Realized Sales Gross Margin": 0.601363525,
" Operating Profit Rate": 0.9988573535,
" Pre-tax net Interest Rate": 0.7964033693,
" After-tax net Interest Rate": 0.8083875215,
" Non-industry income and expenditure/revenue": 0.3020351773,
" Continuous interest rate (after tax)": 0.7802839362,
" Operating Expense Rate": 0.0002361297,
" Research and development expense rate": 25500000.0,
" Cash flow rate": 0.4585205875,
" Interest-bearing debt interest rate": 0.0007900790000000001,
" Tax rate (A)": 0.0,
" Net Value Per Share (B)": 0.1779107497,
" Net Value Per Share (A)": 0.1779107497,
" Net Value Per Share (C)": 0.193712865,
" Persistent EPS in the Last Four Seasons": 0.1805805049,
"...": "and 76 more columns"
}
{
"Bankrupt?": 1.0,
" ROA(C) before interest and depreciation before interest": 0.39984400140000004,
" ROA(A) before interest and % after tax": 0.4512647187,
" ROA(B) before interest and depreciation after tax": 0.4577332834,
" Operating Gross Margin": 0.5835411292,
" Realized Sales Gross Margin": 0.5835411292,
" Operating Profit Rate": 0.9986997471000001,
" Pre-tax net Interest Rate": 0.7969669683,
" After-tax net Interest Rate": 0.8089655977,
" Non-industry income and expenditure/revenue": 0.30334953600000003,
" Continuous interest rate (after tax)": 0.7812409912,
" Operating Expense Rate": 0.00010788880000000001,
" Research and development expense rate": 0.0,
" Cash flow rate": 0.4657054427,
" Interest-bearing debt interest rate": 0.00044904490000000004,
" Tax rate (A)": 0.0,
" Net Value Per Share (B)": 0.1541865071,
" Net Value Per Share (A)": 0.1541865071,
" Net Value Per Share (C)": 0.1541865071,
" Persistent EPS in the Last Four Seasons": 0.1937222275,
"...": "and 76 more columns"
}
<jupyter_script># # Company Bankruptcy
# * Company bankruptcy occurs when a company cannot pay its debts and obligations to creditors, resulting in the company's assets being liquidated to repay those debts.
# * This can lead to the company ceasing operations and potentially going out of business.
# ## Why it is important to study bankruptcy
# * Studying bankruptcy is important because it is a significant economic event that affects individuals, businesses, and society as a whole.
# * Understanding bankruptcy can help make informed financial decisions and promote economic stability.
# ## About data source - Taiwan Economic Journal
# The Taiwanese Economic Journal is a prominent economics-focused media outlet based in Taiwan.
# They cover a range of topics related to the Taiwanese and global economies, including:
# * finance and business
# * trade and investment
# [Click here for dataset details
# ](https://www.kaggle.com/datasets/fedesoriano/company-bankruptcy-prediction)
#
# New libraries are installed from here
# Importing libraries required
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# import dataprep.eda as eda # EDA, Cleaning
from dataprep.eda import create_report
import seaborn as sns # Data visualization
import matplotlib.pyplot as plt # Data visualization
# Data analysis and ML Library
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import Lasso
from sklearn.feature_selection import mutual_info_classif
# File available
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# Functions
# # Let's take a look at the data
company_df = pd.read_csv("/kaggle/input/company-bankruptcy-prediction/data.csv")
print(
"Number of Records : ",
company_df.shape[0],
"\nNumber of Features : ",
company_df.shape[1],
)
company_df.columns = company_df.columns.str.strip()
company_df.columns
company_df.head()
company_df.describe()
company_df.info()
# **Observations :**
# * All the given features are numeric *(int64 or float64)*
# * The columns *Net Income Flag* and *Liability-Assets Flag* look like categorical columns
# * There are no missing values
# Let's check for the presence of missing values
missing_value_count = pd.DataFrame(company_df.isna().sum())
missing_value_count.columns = ["Count"]
print(
"Total number of columns with missing values :",
len(missing_value_count[missing_value_count.Count > 0]),
)
# ### Categorical column distribution
pd.DataFrame(company_df["Net Income Flag"].value_counts()).plot.bar(
y="Net Income Flag", rot=0
)
plt.title("Net income flag distribution")
plt.show()
print("Net Income Flag Distribution\n")
print(pd.DataFrame(company_df["Net Income Flag"].value_counts()))
# Observations :
#
pd.DataFrame(company_df["Liability-Assets Flag"].value_counts()).plot.bar(
y="Liability-Assets Flag", rot=0
)
plt.title("Liability-Assets Flag distribution")
plt.show()
print("Liability-Assets Flag Distribution\n")
print(pd.DataFrame(company_df["Liability-Assets Flag"].value_counts()))
pd.DataFrame(company_df["Bankrupt?"].value_counts()).plot.bar(y="Bankrupt?", rot=0)
plt.title("Bankrupt distribution")
plt.show()
print("Bankrupt Distribution\n")
print(pd.DataFrame(company_df["Bankrupt?"].value_counts()))
# ### Too many columns to perform EDA
# The bankruptcy data has 90+ columns, and carrying out EDA/modelling on all of them is a time- and resource-consuming process.
# #### Curse of dimensionality
#
#
# *The curse of dimensionality is like trying to find a single sock in a mountain of laundry. As the number of dimensions (socks) increases, the chances of finding a match (meaningful patterns) become increasingly elusive and your search turns into a chaotic mess*
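# A tiny numeric aside (not part of the original analysis) illustrating the point: as the
# number of dimensions grows, the nearest and farthest random points become almost equally
# far apart, so distance-based structure washes out.
rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    points = rng.random((200, d))
    dists = np.linalg.norm(points - points[0], axis=1)[1:]
    print(f"dim={d:4d}  min/max distance ratio = {dists.min() / dists.max():.2f}")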
# Let's narrow the analysis down to a limited set of columns; first we will use **Pearson correlation** to shortlist columns based on their linear relationship with the target.
# ### Columns with Linear relationship with Target variable
company_corr = pd.DataFrame(company_df.corr(numeric_only=True))
company_corr = pd.DataFrame(company_corr["Bankrupt?"])
# Remove specific indices, all 3 are categorical
indices_to_remove = ["Liability-Assets Flag", "Net Income Flag", "Bankrupt?"]
company_corr = company_corr.drop(indices_to_remove)
plt.figure(figsize=(8, 17))
sns.barplot(y=company_corr.index, x=company_corr["Bankrupt?"])
plt.title("Pearson correllation with Bankruptcy")
plt.show()
# Let's see which features have at least a weak correlation with the target (absolute value > 0.10)
temp_corr = company_corr.copy()
temp_corr[["Bankrupt?"]] = abs(temp_corr[["Bankrupt?"]])
print("\nColumns with correlation (>|0.10|) :\n")
for i in temp_corr[(temp_corr["Bankrupt?"] > 0.10)].index:
print("* " + i + "\t")
# Select above mentioned features to find correlation between each other
correlated_features = list(temp_corr[(temp_corr["Bankrupt?"] > 0.10)].index) + [
"Bankrupt?"
]
corr_test = company_df[correlated_features]
plt.figure(figsize=(15, 15))
corr = corr_test.corr()
sns.heatmap(corr, cmap="crest", annot=True, fmt=".1f")
plt.show()
# Observations :
# * Let's drop columns whose correlation with another column equals 1.0, since such pairs convey the same information
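# A quick programmatic check for such redundant pairs (a sketch based on the correlation
# matrix computed above):
redundant_pairs = [
    (a, b)
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1 :]
    if abs(corr.loc[a, b]) > 0.99
]
print(redundant_pairs)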
# Columns selected based on Linear relationship
selected_columns_set_linear = [
"ROA(A) before interest and % after tax",
"Net Value Per Share (A)",
"Debt ratio %",
"Working Capital to Total Assets",
"Current Liability to Current Assets",
"Net Income to Stockholder's Equity",
"Operating Gross Margin",
"Bankrupt?",
]
plt.figure(figsize=(10, 10))
sns.heatmap(
company_df[selected_columns_set_linear].corr(), cmap="crest", annot=True, fmt=".1f"
)
plt.show()
# We selected columns based on their linear relationship with the target variable. In the next step we will use **mutual information** to select columns based on their [non-linear relationship](https://towardsdatascience.com/finding-and-visualising-non-linear-relationships-4ecd63a43e7e) with it.
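# A toy illustration (not part of the original notebook) of why mutual information complements
# Pearson correlation: a purely symmetric, non-linear relationship has near-zero correlation
# but clearly non-zero mutual information.
rng = np.random.default_rng(0)
x_demo = rng.normal(size=2000)
y_demo = (x_demo**2 > 1).astype(int)  # depends only on |x|, so Pearson r is ~0
print("Pearson r  :", np.corrcoef(x_demo, y_demo)[0, 1].round(3))
print("Mutual info:", mutual_info_classif(x_demo.reshape(-1, 1), y_demo, random_state=0)[0].round(3))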
# ### Columns with Non-Linear relationship with Target variable
independent_variable = company_df.drop(["Bankrupt?"], axis=1)
target_variable = company_df[["Bankrupt?"]]
importances = mutual_info_classif(
    independent_variable, target_variable.values.ravel()
)
importances = pd.Series(
importances, independent_variable.columns[0 : len(independent_variable.columns)]
)
importances = pd.DataFrame(
{"features": importances.index, "importance": importances.values}
)
# Mutual info
plt.figure(figsize=(20, 7))
sns.barplot(
data=importances,
x="features",
y="importance",
order=importances.sort_values("importance").features,
)
plt.xlabel("Mutual Importance")
plt.xticks(rotation=90)
plt.title("Mutual Importance of Columns")
plt.show()
# Let's select the top 5 columns by mutual importance for EDA and modelling
selected_columns_set_non_linear = np.array(
importances.nlargest(5, "importance").features
)
selected_columns = [*selected_columns_set_linear, *selected_columns_set_non_linear]
selected_columns = np.unique(selected_columns)
# Finally, we have selected the following columns for our EDA and modelling:
# * Bankrupt?
# * Borrowing dependency
# * Current Liability to Current Assets
# * Debt ratio %
# * Net Income to Stockholder's Equity
# * Net Value Per Share (A)
# * Net profit before tax/Paid-in capital
# * Operating Gross Margin
# * Per Share Net profit before tax (Yuan ¥)
# * Persistent EPS in the Last Four Seasons
# * ROA(A) before interest and % after tax
# * Working Capital to Total Assets
# Before jumping into EDA, let's understand what each column means:
# * **Borrowing dependency**: Borrowing dependency refers to a company's reliance on borrowed funds, such as loans or credit, to finance its operations or investments. It indicates the extent to which a company utilizes external debt to support its activities rather than relying solely on internal resources.
# * **Current Liability to Current Assets**: This ratio compares a company's current liabilities (obligations due within one year) to its current assets (assets expected to be converted into cash within one year). It provides an indication of a company's ability to meet its short-term obligations using its short-term assets. A higher ratio may suggest a greater risk of liquidity issues.
# * **Debt ratio %**: The debt ratio is a financial metric that compares a company's total debt to its total assets. It represents the proportion of a company's assets that are financed by debt. A higher debt ratio indicates a higher level of debt relative to assets, which may imply higher financial risk and reduced financial flexibility.
# * **Net Income to Stockholder's Equity**: This ratio, also known as return on equity (ROE), measures a company's profitability relative to its shareholders' equity. It indicates how effectively a company generates profit using the shareholders' investment. A higher ratio implies better profitability and efficient use of equity capital.
# * **Net Value Per Share (A)**: Net Value Per Share is a measure of a company's net assets (assets minus liabilities) divided by the total number of outstanding shares. It represents the per-share value of a company's net worth or book value.
# * **Net profit before tax/Paid-in capital**: This ratio compares a company's net profit before tax to its paid-in capital. It indicates the profitability generated by each unit of capital invested by shareholders.
# * **Operating Gross Margin**: Operating gross margin, also known as gross profit margin, measures the profitability of a company's core operations. It is calculated by dividing the gross profit (revenue minus the cost of goods sold) by the revenue. It represents the percentage of revenue that remains after deducting the direct costs associated with producing or delivering goods or services.
# * **Per Share Net profit before tax (Yuan ¥)**: Per Share Net profit before tax is the net profit before tax of a company divided by the total number of outstanding shares. It represents the earnings per share before tax.
# * **Persistent EPS in the Last Four Seasons**: Persistent EPS (Earnings Per Share) in the Last Four Seasons refers to the average earnings per share of a company over the past four fiscal quarters. It provides an indication of the company's sustained profitability over a specific period.
# * **ROA(A) before interest and % after tax**: Return on Assets (ROA) measures a company's ability to generate profit from its total assets. ROA(A) before interest and % after tax specifically refers to the return on assets before interest expenses and taxes. It indicates the profitability generated by each dollar of assets, excluding the impact of interest payments and taxes.
# * **Working Capital to Total Assets**: This ratio compares a company's working capital (current assets minus current liabilities) to its total assets. It evaluates the proportion of a company's total assets that are funded by its working capital. A higher ratio suggests a higher reliance on short-term assets to finance a company's operations.
# *Thank you ChatGPT for saving my time 🤖*
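# To make the ratio definitions above concrete, a small worked example for a hypothetical
# company (all figures invented for illustration):
toy = {"total_debt": 60.0, "total_assets": 100.0, "net_income": 8.0, "equity": 40.0}
debt_ratio = toy["total_debt"] / toy["total_assets"]  # Debt ratio % -> 0.60
roe = toy["net_income"] / toy["equity"]  # Net Income to Stockholder's Equity (ROE) -> 0.20
print(f"Debt ratio: {debt_ratio:.0%}, ROE: {roe:.0%}")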
# ## Exploratory data analysis
bankruptcy_df = company_df[selected_columns]
bankruptcy_df.head()
create_report(bankruptcy_df)
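# If dataprep is unavailable in the environment, a lightweight fallback for a similar overview
# (a sketch on a random sample to keep the pair plot fast, not a replacement for the report):
print(bankruptcy_df.describe().T)
sns.pairplot(bankruptcy_df.sample(500, random_state=0), hue="Bankrupt?", corner=True)
plt.show()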
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/809/129809628.ipynb
|
company-bankruptcy-prediction
|
fedesoriano
|
[{"Id": 129809628, "ScriptId": 38228088, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6304122, "CreationDate": "05/16/2023 16:05:27", "VersionNumber": 5.0, "Title": "Bankruptcy - EDA & Modelling", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 238.0, "LinesInsertedFromPrevious": 74.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 164.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186183486, "KernelVersionId": 129809628, "SourceDatasetVersionId": 1938459}]
|
[{"Id": 1938459, "DatasetId": 1111894, "DatasourceVersionId": 1977115, "CreatorUserId": 6402661, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "02/13/2021 19:36:03", "VersionNumber": 2.0, "Title": "Company Bankruptcy Prediction", "Slug": "company-bankruptcy-prediction", "Subtitle": "Bankruptcy data from the Taiwan Economic Journal for the years 1999\u20132009", "Description": "### Similar Datasets\n\n- The Boston House-Price Data: [LINK](https://www.kaggle.com/fedesoriano/the-boston-houseprice-data)\n- Gender Pay Gap Dataset: [LINK](https://www.kaggle.com/fedesoriano/gender-pay-gap-dataset)\n- Spanish Wine Quality Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/spanish-wine-quality-dataset)\n\n\n### Context\n\nThe data were collected from the Taiwan Economic Journal for the years 1999 to 2009. Company bankruptcy was defined based on the business regulations of the Taiwan Stock Exchange.\n\n\n### Attribute Information\n\n**Version 2:** Updated column names and description to make the data easier to understand (Y = Output feature, X = Input features)\n\n\nY - Bankrupt?: Class label\nX1 - ROA(C) before interest and depreciation before interest: Return On Total Assets(C)\nX2 - ROA(A) before interest and % after tax: Return On Total Assets(A)\nX3 - ROA(B) before interest and depreciation after tax: Return On Total Assets(B)\nX4 - Operating Gross Margin: Gross Profit/Net Sales\nX5 - Realized Sales Gross Margin: Realized Gross Profit/Net Sales\nX6 - Operating Profit Rate: Operating Income/Net Sales\nX7 - Pre-tax net Interest Rate: Pre-Tax Income/Net Sales\nX8 - After-tax net Interest Rate: Net Income/Net Sales\nX9 - Non-industry income and expenditure/revenue: Net Non-operating Income Ratio\nX10 - Continuous interest rate (after tax): Net Income-Exclude Disposal Gain or Loss/Net Sales\nX11 - Operating Expense Rate: Operating Expenses/Net Sales\nX12 - Research and development expense rate: (Research and Development Expenses)/Net Sales\nX13 - Cash flow rate: Cash Flow from Operating/Current Liabilities\nX14 - Interest-bearing debt interest rate: Interest-bearing Debt/Equity\nX15 - Tax rate (A): Effective Tax Rate\nX16 - Net Value Per Share (B): Book Value Per Share(B)\nX17 - Net Value Per Share (A): Book Value Per Share(A)\nX18 - Net Value Per Share (C): Book Value Per Share(C)\nX19 - Persistent EPS in the Last Four Seasons: EPS-Net Income\nX20 - Cash Flow Per Share\nX21 - Revenue Per Share (Yuan \u00a5): Sales Per Share\nX22 - Operating Profit Per Share (Yuan \u00a5): Operating Income Per Share\nX23 - Per Share Net profit before tax (Yuan \u00a5): Pretax Income Per Share\nX24 - Realized Sales Gross Profit Growth Rate\nX25 - Operating Profit Growth Rate: Operating Income Growth\nX26 - After-tax Net Profit Growth Rate: Net Income Growth\nX27 - Regular Net Profit Growth Rate: Continuing Operating Income after Tax Growth\nX28 - Continuous Net Profit Growth Rate: Net Income-Excluding Disposal Gain or Loss Growth\nX29 - Total Asset Growth Rate: Total Asset Growth\nX30 - Net Value Growth Rate: Total Equity Growth\nX31 - Total Asset Return Growth Rate Ratio: Return on Total Asset Growth\nX32 - Cash Reinvestment %: Cash Reinvestment Ratio\nX33 - Current Ratio\nX34 - Quick Ratio: Acid Test\nX35 - Interest Expense Ratio: Interest Expenses/Total Revenue\nX36 - Total debt/Total net worth: Total Liability/Equity Ratio\nX37 - Debt ratio %: Liability/Total Assets\nX38 - Net worth/Assets: Equity/Total Assets\nX39 - Long-term fund suitability ratio (A): (Long-term 
Liability+Equity)/Fixed Assets\nX40 - Borrowing dependency: Cost of Interest-bearing Debt\nX41 - Contingent liabilities/Net worth: Contingent Liability/Equity\nX42 - Operating profit/Paid-in capital: Operating Income/Capital\nX43 - Net profit before tax/Paid-in capital: Pretax Income/Capital\nX44 - Inventory and accounts receivable/Net value: (Inventory+Accounts Receivables)/Equity\nX45 - Total Asset Turnover\nX46 - Accounts Receivable Turnover\nX47 - Average Collection Days: Days Receivable Outstanding\nX48 - Inventory Turnover Rate (times)\nX49 - Fixed Assets Turnover Frequency\nX50 - Net Worth Turnover Rate (times): Equity Turnover\nX51 - Revenue per person: Sales Per Employee\nX52 - Operating profit per person: Operation Income Per Employee\nX53 - Allocation rate per person: Fixed Assets Per Employee\nX54 - Working Capital to Total Assets\nX55 - Quick Assets/Total Assets\nX56 - Current Assets/Total Assets\nX57 - Cash/Total Assets\nX58 - Quick Assets/Current Liability\nX59 - Cash/Current Liability\nX60 - Current Liability to Assets\nX61 - Operating Funds to Liability\nX62 - Inventory/Working Capital\nX63 - Inventory/Current Liability\nX64 - Current Liabilities/Liability\nX65 - Working Capital/Equity\nX66 - Current Liabilities/Equity\nX67 - Long-term Liability to Current Assets\nX68 - Retained Earnings to Total Assets\nX69 - Total income/Total expense\nX70 - Total expense/Assets\nX71 - Current Asset Turnover Rate: Current Assets to Sales\nX72 - Quick Asset Turnover Rate: Quick Assets to Sales\nX73 - Working capitcal Turnover Rate: Working Capital to Sales\nX74 - Cash Turnover Rate: Cash to Sales\nX75 - Cash Flow to Sales\nX76 - Fixed Assets to Assets\nX77 - Current Liability to Liability\nX78 - Current Liability to Equity\nX79 - Equity to Long-term Liability\nX80 - Cash Flow to Total Assets\nX81 - Cash Flow to Liability\nX82 - CFO to Assets\nX83 - Cash Flow to Equity\nX84 - Current Liability to Current Assets\nX85 - Liability-Assets Flag: 1 if Total Liability exceeds Total Assets, 0 otherwise\nX86 - Net Income to Total Assets\nX87 - Total assets to GNP price\nX88 - No-credit Interval\nX89 - Gross Profit to Sales\nX90 - Net Income to Stockholder's Equity\nX91 - Liability to Equity\nX92 - Degree of Financial Leverage (DFL)\nX93 - Interest Coverage Ratio (Interest expense to EBIT)\nX94 - Net Income Flag: 1 if Net Income is Negative for the last two years, 0 otherwise\nX95 - Equity to Liability\n\n\n### Source\n\nDeron Liang and Chih-Fong Tsai, deronliang '@' gmail.com; cftsai '@' mgt.ncu.edu.tw, National Central University, Taiwan\nThe data was obtained from UCI Machine Learning Repository: [https://archive.ics.uci.edu/ml/datasets/Taiwanese+Bankruptcy+Prediction](https://archive.ics.uci.edu/ml/datasets/Taiwanese+Bankruptcy+Prediction)\n\n\n### Relevant Papers\n\nLiang, D., Lu, C.-C., Tsai, C.-F., and Shih, G.-A. (2016) Financial Ratios and Corporate Governance Indicators in Bankruptcy Prediction: A Comprehensive Study. European Journal of Operational Research, vol. 252, no. 2, pp. 561-572.\n[https://www.sciencedirect.com/science/article/pii/S0377221716000412](https://www.sciencedirect.com/science/article/pii/S0377221716000412)", "VersionNotes": "Updated column descriptions", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1111894, "CreatorUserId": 6402661, "OwnerUserId": 6402661.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 1938459.0, "CurrentDatasourceVersionId": 1977115.0, "ForumId": 1129209, "Type": 2, "CreationDate": "01/22/2021 08:05:03", "LastActivityDate": "01/22/2021", "TotalViews": 312149, "TotalDownloads": 32190, "TotalVotes": 610, "TotalKernels": 169}]
|
[{"Id": 6402661, "UserName": "fedesoriano", "DisplayName": "fedesoriano", "RegisterDate": "12/18/2020", "PerformanceTier": 4}]
|
# # Company Bankruptcy
# * Company bankruptcy occurs when a company cannot pay its debts and obligations to creditors, resulting in the company's assets being liquidated to repay those debts.
# * This can lead to the company ceasing operations and potentially going out of business.
# ## Why it is important to study bankruptcy
# * Studying bankruptcy is important because it is a significant economic event that affects individuals, businesses, and society as a whole.
# * Understanding bankruptcy can help make informed financial decisions and promote economic stability.
# ## About data source - Taiwan Economic Journal
# The Taiwanese Economic Journal is a prominent economics-focused media outlet based in Taiwan.
# They cover a range of topics related to the Taiwanese and global economies, including:
# * finance and business
# * trade and investment
# [Click here for dataset details
# ](https://www.kaggle.com/datasets/fedesoriano/company-bankruptcy-prediction)
#
# New libraries are installed from here
# Importing libraries required
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# import dataprep.eda as eda # EDA, Cleaning
from dataprep.eda import create_report
import seaborn as sns # Data visualization
import matplotlib.pyplot as plt # Data visualization
# Data analysis and ML Library
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import Lasso
from sklearn.feature_selection import mutual_info_classif
# File available
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# Functions
# # Let's take a look at the data
company_df = pd.read_csv("/kaggle/input/company-bankruptcy-prediction/data.csv")
print(
"Number of Records : ",
company_df.shape[0],
"\nNumber of Features : ",
company_df.shape[1],
)
company_df.columns = company_df.columns.str.strip()
company_df.columns
company_df.head()
company_df.describe()
company_df.info()
# **Observations :**
# * All the given features are numeric *(int64 or float64)*
# * The columns *Net Income Flag* and *Liability-Assets Flag* look like categorical columns
# * There are no missing values
# Let's check for the presence of missing values
missing_value_count = pd.DataFrame(company_df.isna().sum())
missing_value_count.columns = ["Count"]
print(
"Total number of columns with missing values :",
len(missing_value_count[missing_value_count.Count > 0]),
)
# ### Categorical column distribution
pd.DataFrame(company_df["Net Income Flag"].value_counts()).plot.bar(
y="Net Income Flag", rot=0
)
plt.title("Net income flag distribution")
plt.show()
print("Net Income Flag Distribution\n")
print(pd.DataFrame(company_df["Net Income Flag"].value_counts()))
# Observations :
#
pd.DataFrame(company_df["Liability-Assets Flag"].value_counts()).plot.bar(
y="Liability-Assets Flag", rot=0
)
plt.title("Liability-Assets Flag distribution")
plt.show()
print("Liability-Assets Flag Distribution\n")
print(pd.DataFrame(company_df["Liability-Assets Flag"].value_counts()))
pd.DataFrame(company_df["Bankrupt?"].value_counts()).plot.bar(y="Bankrupt?", rot=0)
plt.title("Bankrupt distribution")
plt.show()
print("Bankrupt Distribution\n")
print(pd.DataFrame(company_df["Bankrupt?"].value_counts()))
# ### Too many columns to perform EDA
# The bankruptcy data has 90+ columns, and carrying out EDA/modelling on all of them is a time- and resource-consuming process.
# #### Curse of dimensionality
#
#
# *The curse of dimensionality is like trying to find a single sock in a mountain of laundry. As the number of dimensions (socks) increases, the chances of finding a match (meaningful patterns) become increasingly elusive and your search turns into a chaotic mess*
# Let's narrow the analysis down to a limited set of columns; first we will use **Pearson correlation** to shortlist columns based on their linear relationship with the target.
# ### Columns with Linear relationship with Target variable
company_corr = pd.DataFrame(company_df.corr(numeric_only=True))
company_corr = pd.DataFrame(company_corr["Bankrupt?"])
# Remove specific indices, all 3 are categorical
indices_to_remove = ["Liability-Assets Flag", "Net Income Flag", "Bankrupt?"]
company_corr = company_corr.drop(indices_to_remove)
plt.figure(figsize=(8, 17))
sns.barplot(y=company_corr.index, x=company_corr["Bankrupt?"])
plt.title("Pearson correllation with Bankruptcy")
plt.show()
# Let's see which features have at least a weak correlation with the target (absolute value > 0.10)
temp_corr = company_corr.copy()
temp_corr[["Bankrupt?"]] = abs(temp_corr[["Bankrupt?"]])
print("\nColumns with correlation (>|0.10|) :\n")
for i in temp_corr[(temp_corr["Bankrupt?"] > 0.10)].index:
print("* " + i + "\t")
# Select above mentioned features to find correlation between each other
correlated_features = list(temp_corr[(temp_corr["Bankrupt?"] > 0.10)].index) + [
"Bankrupt?"
]
corr_test = company_df[correlated_features]
plt.figure(figsize=(15, 15))
corr = corr_test.corr()
sns.heatmap(corr, cmap="crest", annot=True, fmt=".1f")
plt.show()
# Observations :
# * Let's drop columns whose correlation with another column equals 1.0, since such pairs convey the same information
# Columns selected based on Linear relationship
selected_columns_set_linear = [
"ROA(A) before interest and % after tax",
"Net Value Per Share (A)",
"Debt ratio %",
"Working Capital to Total Assets",
"Current Liability to Current Assets",
"Net Income to Stockholder's Equity",
"Operating Gross Margin",
"Bankrupt?",
]
plt.figure(figsize=(10, 10))
sns.heatmap(
company_df[selected_columns_set_linear].corr(), cmap="crest", annot=True, fmt=".1f"
)
plt.show()
# We selected columns based on their linear relationship with the target variable. In the next step we will use **mutual information** to select columns based on their [non-linear relationship](https://towardsdatascience.com/finding-and-visualising-non-linear-relationships-4ecd63a43e7e) with it.
# ### Columns with Non-Linear relationship with Target variable
independent_variable = company_df.drop(["Bankrupt?"], axis=1)
target_variable = company_df[["Bankrupt?"]]
importances = mutual_info_classif(
    independent_variable, target_variable.values.ravel()
)
importances = pd.Series(
importances, independent_variable.columns[0 : len(independent_variable.columns)]
)
importances = pd.DataFrame(
{"features": importances.index, "importance": importances.values}
)
# Mutual info
plt.figure(figsize=(20, 7))
sns.barplot(
data=importances,
x="features",
y="importance",
order=importances.sort_values("importance").features,
)
plt.xlabel("Mutual Importance")
plt.xticks(rotation=90)
plt.title("Mutual Importance of Columns")
plt.show()
# Let's select the top 5 columns by mutual importance for EDA and modelling
selected_columns_set_non_linear = np.array(
importances.nlargest(5, "importance").features
)
selected_columns = [*selected_columns_set_linear, *selected_columns_set_non_linear]
selected_columns = np.unique(selected_columns)
# Finally, we have selected the following columns for our EDA and modelling:
# * Bankrupt?
# * Borrowing dependency
# * Current Liability to Current Assets
# * Debt ratio %
# * Net Income to Stockholder's Equity
# * Net Value Per Share (A)
# * Net profit before tax/Paid-in capital
# * Operating Gross Margin
# * Per Share Net profit before tax (Yuan ¥)
# * Persistent EPS in the Last Four Seasons
# * ROA(A) before interest and % after tax
# * Working Capital to Total Assets
# Before jumping into EDA, let's understand what each column means:
# * **Borrowing dependency**: Borrowing dependency refers to a company's reliance on borrowed funds, such as loans or credit, to finance its operations or investments. It indicates the extent to which a company utilizes external debt to support its activities rather than relying solely on internal resources.
# * **Current Liability to Current Assets**: This ratio compares a company's current liabilities (obligations due within one year) to its current assets (assets expected to be converted into cash within one year). It provides an indication of a company's ability to meet its short-term obligations using its short-term assets. A higher ratio may suggest a greater risk of liquidity issues.
# * **Debt ratio %**: The debt ratio is a financial metric that compares a company's total debt to its total assets. It represents the proportion of a company's assets that are financed by debt. A higher debt ratio indicates a higher level of debt relative to assets, which may imply higher financial risk and reduced financial flexibility.
# * **Net Income to Stockholder's Equity**: This ratio, also known as return on equity (ROE), measures a company's profitability relative to its shareholders' equity. It indicates how effectively a company generates profit using the shareholders' investment. A higher ratio implies better profitability and efficient use of equity capital.
# * **Net Value Per Share (A)**: Net Value Per Share is a measure of a company's net assets (assets minus liabilities) divided by the total number of outstanding shares. It represents the per-share value of a company's net worth or book value.
# * **Net profit before tax/Paid-in capital**: This ratio compares a company's net profit before tax to its paid-in capital. It indicates the profitability generated by each unit of capital invested by shareholders.
# * **Operating Gross Margin**: Operating gross margin, also known as gross profit margin, measures the profitability of a company's core operations. It is calculated by dividing the gross profit (revenue minus the cost of goods sold) by the revenue. It represents the percentage of revenue that remains after deducting the direct costs associated with producing or delivering goods or services.
# * **Per Share Net profit before tax (Yuan ¥)**: Per Share Net profit before tax is the net profit before tax of a company divided by the total number of outstanding shares. It represents the earnings per share before tax.
# * **Persistent EPS in the Last Four Seasons**: Persistent EPS (Earnings Per Share) in the Last Four Seasons refers to the average earnings per share of a company over the past four fiscal quarters. It provides an indication of the company's sustained profitability over a specific period.
# * **ROA(A) before interest and % after tax**: Return on Assets (ROA) measures a company's ability to generate profit from its total assets. ROA(A) before interest and % after tax specifically refers to the return on assets before interest expenses and taxes. It indicates the profitability generated by each dollar of assets, excluding the impact of interest payments and taxes.
# * **Working Capital to Total Assets**: This ratio compares a company's working capital (current assets minus current liabilities) to its total assets. It evaluates the proportion of a company's total assets that are funded by its working capital. A higher ratio suggests a higher reliance on short-term assets to finance a company's operations.
# *Thank you ChatGPT for saving my time 🤖*
# ## Exploratory data analysis
bankruptcy_df = company_df[selected_columns]
bankruptcy_df.head()
create_report(bankruptcy_df)
|
[{"company-bankruptcy-prediction/data.csv": {"column_names": "[\"Bankrupt?\", \" ROA(C) before interest and depreciation before interest\", \" ROA(A) before interest and % after tax\", \" ROA(B) before interest and depreciation after tax\", \" Operating Gross Margin\", \" Realized Sales Gross Margin\", \" Operating Profit Rate\", \" Pre-tax net Interest Rate\", \" After-tax net Interest Rate\", \" Non-industry income and expenditure/revenue\", \" Continuous interest rate (after tax)\", \" Operating Expense Rate\", \" Research and development expense rate\", \" Cash flow rate\", \" Interest-bearing debt interest rate\", \" Tax rate (A)\", \" Net Value Per Share (B)\", \" Net Value Per Share (A)\", \" Net Value Per Share (C)\", \" Persistent EPS in the Last Four Seasons\", \" Cash Flow Per Share\", \" Revenue Per Share (Yuan \\u00a5)\", \" Operating Profit Per Share (Yuan \\u00a5)\", \" Per Share Net profit before tax (Yuan \\u00a5)\", \" Realized Sales Gross Profit Growth Rate\", \" Operating Profit Growth Rate\", \" After-tax Net Profit Growth Rate\", \" Regular Net Profit Growth Rate\", \" Continuous Net Profit Growth Rate\", \" Total Asset Growth Rate\", \" Net Value Growth Rate\", \" Total Asset Return Growth Rate Ratio\", \" Cash Reinvestment %\", \" Current Ratio\", \" Quick Ratio\", \" Interest Expense Ratio\", \" Total debt/Total net worth\", \" Debt ratio %\", \" Net worth/Assets\", \" Long-term fund suitability ratio (A)\", \" Borrowing dependency\", \" Contingent liabilities/Net worth\", \" Operating profit/Paid-in capital\", \" Net profit before tax/Paid-in capital\", \" Inventory and accounts receivable/Net value\", \" Total Asset Turnover\", \" Accounts Receivable Turnover\", \" Average Collection Days\", \" Inventory Turnover Rate (times)\", \" Fixed Assets Turnover Frequency\", \" Net Worth Turnover Rate (times)\", \" Revenue per person\", \" Operating profit per person\", \" Allocation rate per person\", \" Working Capital to Total Assets\", \" Quick Assets/Total Assets\", \" Current Assets/Total Assets\", \" Cash/Total Assets\", \" Quick Assets/Current Liability\", \" Cash/Current Liability\", \" Current Liability to Assets\", \" Operating Funds to Liability\", \" Inventory/Working Capital\", \" Inventory/Current Liability\", \" Current Liabilities/Liability\", \" Working Capital/Equity\", \" Current Liabilities/Equity\", \" Long-term Liability to Current Assets\", \" Retained Earnings to Total Assets\", \" Total income/Total expense\", \" Total expense/Assets\", \" Current Asset Turnover Rate\", \" Quick Asset Turnover Rate\", \" Working capitcal Turnover Rate\", \" Cash Turnover Rate\", \" Cash Flow to Sales\", \" Fixed Assets to Assets\", \" Current Liability to Liability\", \" Current Liability to Equity\", \" Equity to Long-term Liability\", \" Cash Flow to Total Assets\", \" Cash Flow to Liability\", \" CFO to Assets\", \" Cash Flow to Equity\", \" Current Liability to Current Assets\", \" Liability-Assets Flag\", \" Net Income to Total Assets\", \" Total assets to GNP price\", \" No-credit Interval\", \" Gross Profit to Sales\", \" Net Income to Stockholder's Equity\", \" Liability to Equity\", \" Degree of Financial Leverage (DFL)\", \" Interest Coverage Ratio (Interest expense to EBIT)\", \" Net Income Flag\", \" Equity to Liability\"]", "column_data_types": "{\"Bankrupt?\": \"int64\", \" ROA(C) before interest and depreciation before interest\": \"float64\", \" ROA(A) before interest and % after tax\": \"float64\", \" ROA(B) before interest and depreciation after 
tax\": \"float64\", \" Operating Gross Margin\": \"float64\", \" Realized Sales Gross Margin\": \"float64\", \" Operating Profit Rate\": \"float64\", \" Pre-tax net Interest Rate\": \"float64\", \" After-tax net Interest Rate\": \"float64\", \" Non-industry income and expenditure/revenue\": \"float64\", \" Continuous interest rate (after tax)\": \"float64\", \" Operating Expense Rate\": \"float64\", \" Research and development expense rate\": \"float64\", \" Cash flow rate\": \"float64\", \" Interest-bearing debt interest rate\": \"float64\", \" Tax rate (A)\": \"float64\", \" Net Value Per Share (B)\": \"float64\", \" Net Value Per Share (A)\": \"float64\", \" Net Value Per Share (C)\": \"float64\", \" Persistent EPS in the Last Four Seasons\": \"float64\", \" Cash Flow Per Share\": \"float64\", \" Revenue Per Share (Yuan \\u00a5)\": \"float64\", \" Operating Profit Per Share (Yuan \\u00a5)\": \"float64\", \" Per Share Net profit before tax (Yuan \\u00a5)\": \"float64\", \" Realized Sales Gross Profit Growth Rate\": \"float64\", \" Operating Profit Growth Rate\": \"float64\", \" After-tax Net Profit Growth Rate\": \"float64\", \" Regular Net Profit Growth Rate\": \"float64\", \" Continuous Net Profit Growth Rate\": \"float64\", \" Total Asset Growth Rate\": \"float64\", \" Net Value Growth Rate\": \"float64\", \" Total Asset Return Growth Rate Ratio\": \"float64\", \" Cash Reinvestment %\": \"float64\", \" Current Ratio\": \"float64\", \" Quick Ratio\": \"float64\", \" Interest Expense Ratio\": \"float64\", \" Total debt/Total net worth\": \"float64\", \" Debt ratio %\": \"float64\", \" Net worth/Assets\": \"float64\", \" Long-term fund suitability ratio (A)\": \"float64\", \" Borrowing dependency\": \"float64\", \" Contingent liabilities/Net worth\": \"float64\", \" Operating profit/Paid-in capital\": \"float64\", \" Net profit before tax/Paid-in capital\": \"float64\", \" Inventory and accounts receivable/Net value\": \"float64\", \" Total Asset Turnover\": \"float64\", \" Accounts Receivable Turnover\": \"float64\", \" Average Collection Days\": \"float64\", \" Inventory Turnover Rate (times)\": \"float64\", \" Fixed Assets Turnover Frequency\": \"float64\", \" Net Worth Turnover Rate (times)\": \"float64\", \" Revenue per person\": \"float64\", \" Operating profit per person\": \"float64\", \" Allocation rate per person\": \"float64\", \" Working Capital to Total Assets\": \"float64\", \" Quick Assets/Total Assets\": \"float64\", \" Current Assets/Total Assets\": \"float64\", \" Cash/Total Assets\": \"float64\", \" Quick Assets/Current Liability\": \"float64\", \" Cash/Current Liability\": \"float64\", \" Current Liability to Assets\": \"float64\", \" Operating Funds to Liability\": \"float64\", \" Inventory/Working Capital\": \"float64\", \" Inventory/Current Liability\": \"float64\", \" Current Liabilities/Liability\": \"float64\", \" Working Capital/Equity\": \"float64\", \" Current Liabilities/Equity\": \"float64\", \" Long-term Liability to Current Assets\": \"float64\", \" Retained Earnings to Total Assets\": \"float64\", \" Total income/Total expense\": \"float64\", \" Total expense/Assets\": \"float64\", \" Current Asset Turnover Rate\": \"float64\", \" Quick Asset Turnover Rate\": \"float64\", \" Working capitcal Turnover Rate\": \"float64\", \" Cash Turnover Rate\": \"float64\", \" Cash Flow to Sales\": \"float64\", \" Fixed Assets to Assets\": \"float64\", \" Current Liability to Liability\": \"float64\", \" Current Liability to Equity\": \"float64\", \" Equity to Long-term 
Liability\": \"float64\", \" Cash Flow to Total Assets\": \"float64\", \" Cash Flow to Liability\": \"float64\", \" CFO to Assets\": \"float64\", \" Cash Flow to Equity\": \"float64\", \" Current Liability to Current Assets\": \"float64\", \" Liability-Assets Flag\": \"int64\", \" Net Income to Total Assets\": \"float64\", \" Total assets to GNP price\": \"float64\", \" No-credit Interval\": \"float64\", \" Gross Profit to Sales\": \"float64\", \" Net Income to Stockholder's Equity\": \"float64\", \" Liability to Equity\": \"float64\", \" Degree of Financial Leverage (DFL)\": \"float64\", \" Interest Coverage Ratio (Interest expense to EBIT)\": \"float64\", \" Net Income Flag\": \"int64\", \" Equity to Liability\": \"float64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6819 entries, 0 to 6818\nData columns (total 96 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Bankrupt? 6819 non-null int64 \n 1 ROA(C) before interest and depreciation before interest 6819 non-null float64\n 2 ROA(A) before interest and % after tax 6819 non-null float64\n 3 ROA(B) before interest and depreciation after tax 6819 non-null float64\n 4 Operating Gross Margin 6819 non-null float64\n 5 Realized Sales Gross Margin 6819 non-null float64\n 6 Operating Profit Rate 6819 non-null float64\n 7 Pre-tax net Interest Rate 6819 non-null float64\n 8 After-tax net Interest Rate 6819 non-null float64\n 9 Non-industry income and expenditure/revenue 6819 non-null float64\n 10 Continuous interest rate (after tax) 6819 non-null float64\n 11 Operating Expense Rate 6819 non-null float64\n 12 Research and development expense rate 6819 non-null float64\n 13 Cash flow rate 6819 non-null float64\n 14 Interest-bearing debt interest rate 6819 non-null float64\n 15 Tax rate (A) 6819 non-null float64\n 16 Net Value Per Share (B) 6819 non-null float64\n 17 Net Value Per Share (A) 6819 non-null float64\n 18 Net Value Per Share (C) 6819 non-null float64\n 19 Persistent EPS in the Last Four Seasons 6819 non-null float64\n 20 Cash Flow Per Share 6819 non-null float64\n 21 Revenue Per Share (Yuan \u00a5) 6819 non-null float64\n 22 Operating Profit Per Share (Yuan \u00a5) 6819 non-null float64\n 23 Per Share Net profit before tax (Yuan \u00a5) 6819 non-null float64\n 24 Realized Sales Gross Profit Growth Rate 6819 non-null float64\n 25 Operating Profit Growth Rate 6819 non-null float64\n 26 After-tax Net Profit Growth Rate 6819 non-null float64\n 27 Regular Net Profit Growth Rate 6819 non-null float64\n 28 Continuous Net Profit Growth Rate 6819 non-null float64\n 29 Total Asset Growth Rate 6819 non-null float64\n 30 Net Value Growth Rate 6819 non-null float64\n 31 Total Asset Return Growth Rate Ratio 6819 non-null float64\n 32 Cash Reinvestment % 6819 non-null float64\n 33 Current Ratio 6819 non-null float64\n 34 Quick Ratio 6819 non-null float64\n 35 Interest Expense Ratio 6819 non-null float64\n 36 Total debt/Total net worth 6819 non-null float64\n 37 Debt ratio % 6819 non-null float64\n 38 Net worth/Assets 6819 non-null float64\n 39 Long-term fund suitability ratio (A) 6819 non-null float64\n 40 Borrowing dependency 6819 non-null float64\n 41 Contingent liabilities/Net worth 6819 non-null float64\n 42 Operating profit/Paid-in capital 6819 non-null float64\n 43 Net profit before tax/Paid-in capital 6819 non-null float64\n 44 Inventory and accounts receivable/Net value 6819 non-null float64\n 45 Total Asset Turnover 6819 non-null float64\n 46 Accounts Receivable Turnover 6819 non-null 
float64\n 47 Average Collection Days 6819 non-null float64\n 48 Inventory Turnover Rate (times) 6819 non-null float64\n 49 Fixed Assets Turnover Frequency 6819 non-null float64\n 50 Net Worth Turnover Rate (times) 6819 non-null float64\n 51 Revenue per person 6819 non-null float64\n 52 Operating profit per person 6819 non-null float64\n 53 Allocation rate per person 6819 non-null float64\n 54 Working Capital to Total Assets 6819 non-null float64\n 55 Quick Assets/Total Assets 6819 non-null float64\n 56 Current Assets/Total Assets 6819 non-null float64\n 57 Cash/Total Assets 6819 non-null float64\n 58 Quick Assets/Current Liability 6819 non-null float64\n 59 Cash/Current Liability 6819 non-null float64\n 60 Current Liability to Assets 6819 non-null float64\n 61 Operating Funds to Liability 6819 non-null float64\n 62 Inventory/Working Capital 6819 non-null float64\n 63 Inventory/Current Liability 6819 non-null float64\n 64 Current Liabilities/Liability 6819 non-null float64\n 65 Working Capital/Equity 6819 non-null float64\n 66 Current Liabilities/Equity 6819 non-null float64\n 67 Long-term Liability to Current Assets 6819 non-null float64\n 68 Retained Earnings to Total Assets 6819 non-null float64\n 69 Total income/Total expense 6819 non-null float64\n 70 Total expense/Assets 6819 non-null float64\n 71 Current Asset Turnover Rate 6819 non-null float64\n 72 Quick Asset Turnover Rate 6819 non-null float64\n 73 Working capitcal Turnover Rate 6819 non-null float64\n 74 Cash Turnover Rate 6819 non-null float64\n 75 Cash Flow to Sales 6819 non-null float64\n 76 Fixed Assets to Assets 6819 non-null float64\n 77 Current Liability to Liability 6819 non-null float64\n 78 Current Liability to Equity 6819 non-null float64\n 79 Equity to Long-term Liability 6819 non-null float64\n 80 Cash Flow to Total Assets 6819 non-null float64\n 81 Cash Flow to Liability 6819 non-null float64\n 82 CFO to Assets 6819 non-null float64\n 83 Cash Flow to Equity 6819 non-null float64\n 84 Current Liability to Current Assets 6819 non-null float64\n 85 Liability-Assets Flag 6819 non-null int64 \n 86 Net Income to Total Assets 6819 non-null float64\n 87 Total assets to GNP price 6819 non-null float64\n 88 No-credit Interval 6819 non-null float64\n 89 Gross Profit to Sales 6819 non-null float64\n 90 Net Income to Stockholder's Equity 6819 non-null float64\n 91 Liability to Equity 6819 non-null float64\n 92 Degree of Financial Leverage (DFL) 6819 non-null float64\n 93 Interest Coverage Ratio (Interest expense to EBIT) 6819 non-null float64\n 94 Net Income Flag 6819 non-null int64 \n 95 Equity to Liability 6819 non-null float64\ndtypes: float64(93), int64(3)\nmemory usage: 5.0 MB\n", "summary": "{\"Bankrupt?\": {\"count\": 6819.0, \"mean\": 0.03226279513125092, \"std\": 0.17671017660774022, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 0.0, \"max\": 1.0}, \" ROA(C) before interest and depreciation before interest\": {\"count\": 6819.0, \"mean\": 0.5051796332417815, \"std\": 0.06068563875428444, \"min\": 0.0, \"25%\": 0.476527080388047, \"50%\": 0.502705601325988, \"75%\": 0.535562813825379, \"max\": 1.0}, \" ROA(A) before interest and % after tax\": {\"count\": 6819.0, \"mean\": 0.5586249158750463, \"std\": 0.06562003103170726, \"min\": 0.0, \"25%\": 0.53554295682512, \"50%\": 0.559801569995639, \"75%\": 0.58915721761884, \"max\": 1.0}, \" ROA(B) before interest and depreciation after tax\": {\"count\": 6819.0, \"mean\": 0.5535887093516657, \"std\": 0.06159480929187566, \"min\": 0.0, \"25%\": 0.527276620804112, \"50%\": 
0.552277959205525, \"75%\": 0.584105144815033, \"max\": 1.0}, \" Operating Gross Margin\": {\"count\": 6819.0, \"mean\": 0.6079480383703836, \"std\": 0.01693381254822146, \"min\": 0.0, \"25%\": 0.6004446590466855, \"50%\": 0.605997492036495, \"75%\": 0.613914152697502, \"max\": 1.0}, \" Realized Sales Gross Margin\": {\"count\": 6819.0, \"mean\": 0.6079294691769791, \"std\": 0.01691607005567578, \"min\": 0.0, \"25%\": 0.600433848859165, \"50%\": 0.605975871661454, \"75%\": 0.6138420847806975, \"max\": 1.0}, \" Operating Profit Rate\": {\"count\": 6819.0, \"mean\": 0.9987551277900442, \"std\": 0.013010025092984125, \"min\": 0.0, \"25%\": 0.998969203197885, \"50%\": 0.999022239374566, \"75%\": 0.999094514164357, \"max\": 1.0}, \" Pre-tax net Interest Rate\": {\"count\": 6819.0, \"mean\": 0.7971897524712905, \"std\": 0.012868988419884597, \"min\": 0.0, \"25%\": 0.797385863236893, \"50%\": 0.797463610578231, \"75%\": 0.797578848185589, \"max\": 1.0}, \" After-tax net Interest Rate\": {\"count\": 6819.0, \"mean\": 0.8090835935135348, \"std\": 0.013600653945149043, \"min\": 0.0, \"25%\": 0.809311597146491, \"50%\": 0.809375198550956, \"75%\": 0.809469266134837, \"max\": 1.0}, \" Non-industry income and expenditure/revenue\": {\"count\": 6819.0, \"mean\": 0.303622923649734, \"std\": 0.011163439838128548, \"min\": 0.0, \"25%\": 0.30346627659685, \"50%\": 0.303525492830123, \"75%\": 0.303585192461218, \"max\": 1.0}, \" Continuous interest rate (after tax)\": {\"count\": 6819.0, \"mean\": 0.7813814325261418, \"std\": 0.012679004028913246, \"min\": 0.0, \"25%\": 0.7815668165898519, \"50%\": 0.781634957112874, \"75%\": 0.7817353784192015, \"max\": 1.0}, \" Operating Expense Rate\": {\"count\": 6819.0, \"mean\": 1995347312.8028853, \"std\": 3237683890.5223837, \"min\": 0.0, \"25%\": 0.0001566874492428, \"50%\": 0.0002777588583625, \"75%\": 4145000000.0, \"max\": 9990000000.0}, \" Research and development expense rate\": {\"count\": 6819.0, \"mean\": 1950427306.0568295, \"std\": 2598291553.9983206, \"min\": 0.0, \"25%\": 0.000128187953762, \"50%\": 509000000.0, \"75%\": 3450000000.0, \"max\": 9980000000.0}, \" Cash flow rate\": {\"count\": 6819.0, \"mean\": 0.4674311857796612, \"std\": 0.017035517308785362, \"min\": 0.0, \"25%\": 0.4615577531181065, \"50%\": 0.465079724549793, \"75%\": 0.471003917029432, \"max\": 1.0}, \" Interest-bearing debt interest rate\": {\"count\": 6819.0, \"mean\": 16448012.905942537, \"std\": 108275033.532824, \"min\": 0.0, \"25%\": 0.0002030203020302, \"50%\": 0.0003210321032103, \"75%\": 0.0005325532553255, \"max\": 990000000.0}, \" Tax rate (A)\": {\"count\": 6819.0, \"mean\": 0.11500074794142427, \"std\": 0.13866749672835132, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0734892195566353, \"75%\": 0.205840672132807, \"max\": 1.0}, \" Net Value Per Share (B)\": {\"count\": 6819.0, \"mean\": 0.19066057949747392, \"std\": 0.033389768351330985, \"min\": 0.0, \"25%\": 0.173612574269942, \"50%\": 0.184400151700308, \"75%\": 0.199570182461759, \"max\": 1.0}, \" Net Value Per Share (A)\": {\"count\": 6819.0, \"mean\": 0.19063317896774643, \"std\": 0.033473514172429, \"min\": 0.0, \"25%\": 0.173612574269942, \"50%\": 0.184400151700308, \"75%\": 0.199570182461759, \"max\": 1.0}, \" Net Value Per Share (C)\": {\"count\": 6819.0, \"mean\": 0.1906723702531618, \"std\": 0.03348013767040907, \"min\": 0.0, \"25%\": 0.1736757827314485, \"50%\": 0.184400151700308, \"75%\": 0.199612321436096, \"max\": 1.0}, \" Persistent EPS in the Last Four Seasons\": {\"count\": 6819.0, \"mean\": 
0.22881285256452782, \"std\": 0.03326261307597689, \"min\": 0.0, \"25%\": 0.214711165736976, \"50%\": 0.22454382149948, \"75%\": 0.2388200813085, \"max\": 1.0}, \" Cash Flow Per Share\": {\"count\": 6819.0, \"mean\": 0.32348191216983224, \"std\": 0.01761091295834377, \"min\": 0.0, \"25%\": 0.317747754120393, \"50%\": 0.322487090613284, \"75%\": 0.3286234703260945, \"max\": 1.0}, \" Revenue Per Share (Yuan \\u00a5)\": {\"count\": 6819.0, \"mean\": 1328640.6020960682, \"std\": 51707089.76790663, \"min\": 0.0, \"25%\": 0.01563138073415305, \"50%\": 0.0273757127516373, \"75%\": 0.0463572152396509, \"max\": 3020000000.0}, \" Operating Profit Per Share (Yuan \\u00a5)\": {\"count\": 6819.0, \"mean\": 0.10909073887546925, \"std\": 0.027942244774416095, \"min\": 0.0, \"25%\": 0.0960833808321798, \"50%\": 0.104226040224737, \"75%\": 0.1161550362348345, \"max\": 1.0}, \" Per Share Net profit before tax (Yuan \\u00a5)\": {\"count\": 6819.0, \"mean\": 0.18436057764203345, \"std\": 0.03318020898090537, \"min\": 0.0, \"25%\": 0.170369812457634, \"50%\": 0.179709271672818, \"75%\": 0.193492505837162, \"max\": 1.0}, \" Realized Sales Gross Profit Growth Rate\": {\"count\": 6819.0, \"mean\": 0.02240785447416587, \"std\": 0.012079270152911608, \"min\": 0.0, \"25%\": 0.022064532735505453, \"50%\": 0.0221023731764072, \"75%\": 0.022153148426612798, \"max\": 1.0}, \" Operating Profit Growth Rate\": {\"count\": 6819.0, \"mean\": 0.8479799951688084, \"std\": 0.010752477405401351, \"min\": 0.0, \"25%\": 0.8479841081819834, \"50%\": 0.848043533745768, \"75%\": 0.8481225403945605, \"max\": 1.0}, \" After-tax Net Profit Growth Rate\": {\"count\": 6819.0, \"mean\": 0.6891461185681318, \"std\": 0.013853022260934765, \"min\": 0.0, \"25%\": 0.6892699337448115, \"50%\": 0.689438526343149, \"75%\": 0.6896471679790515, \"max\": 1.0}, \" Regular Net Profit Growth Rate\": {\"count\": 6819.0, \"mean\": 0.6891500117795616, \"std\": 0.0139102834140106, \"min\": 0.0, \"25%\": 0.689270265563206, \"50%\": 0.689438555196922, \"75%\": 0.6896470092832976, \"max\": 1.0}, \" Continuous Net Profit Growth Rate\": {\"count\": 6819.0, \"mean\": 0.21763901299696697, \"std\": 0.01006296314611611, \"min\": 0.0, \"25%\": 0.2175795122117655, \"50%\": 0.217598046961963, \"75%\": 0.217621501194243, \"max\": 1.0}, \" Total Asset Growth Rate\": {\"count\": 6819.0, \"mean\": 5508096595.248749, \"std\": 2897717771.1697035, \"min\": 0.0, \"25%\": 4860000000.0, \"50%\": 6400000000.0, \"75%\": 7390000000.0, \"max\": 9990000000.0}, \" Net Value Growth Rate\": {\"count\": 6819.0, \"mean\": 1566212.055241067, \"std\": 114159389.51834548, \"min\": 0.0, \"25%\": 0.0004409688868264, \"50%\": 0.0004619555222076, \"75%\": 0.000499362141038, \"max\": 9330000000.0}, \" Total Asset Return Growth Rate Ratio\": {\"count\": 6819.0, \"mean\": 0.2642475118758414, \"std\": 0.009634208862611605, \"min\": 0.0, \"25%\": 0.263758926420651, \"50%\": 0.264049545034229, \"75%\": 0.264388341065032, \"max\": 1.0}, \" Cash Reinvestment %\": {\"count\": 6819.0, \"mean\": 0.37967667232266245, \"std\": 0.020736565809616806, \"min\": 0.0, \"25%\": 0.37474851905666695, \"50%\": 0.380425468499683, \"75%\": 0.386731120301032, \"max\": 1.0}, \" Current Ratio\": {\"count\": 6819.0, \"mean\": 403284.954244977, \"std\": 33302155.825480215, \"min\": 0.0, \"25%\": 0.00755504663011965, \"50%\": 0.0105871744549939, \"75%\": 0.0162695280201934, \"max\": 2750000000.0}, \" Quick Ratio\": {\"count\": 6819.0, \"mean\": 8376594.819684891, \"std\": 244684748.44687235, \"min\": 0.0, \"25%\": 
0.004725903227376101, \"50%\": 0.0074124720675444, \"75%\": 0.01224910697241505, \"max\": 9230000000.0}, \" Interest Expense Ratio\": {\"count\": 6819.0, \"mean\": 0.6309910117124214, \"std\": 0.011238461504050156, \"min\": 0.0, \"25%\": 0.63061225188696, \"50%\": 0.630698209613567, \"75%\": 0.631125258558102, \"max\": 1.0}, \" Total debt/Total net worth\": {\"count\": 6819.0, \"mean\": 4416336.714259365, \"std\": 168406905.28151134, \"min\": 0.0, \"25%\": 0.0030070491250148, \"50%\": 0.005546284390702, \"75%\": 0.00927329266179695, \"max\": 9940000000.0}, \" Debt ratio %\": {\"count\": 6819.0, \"mean\": 0.11317708497306007, \"std\": 0.05392030606308283, \"min\": 0.0, \"25%\": 0.0728905281615624, \"50%\": 0.111406717658796, \"75%\": 0.148804305106267, \"max\": 1.0}, \" Net worth/Assets\": {\"count\": 6819.0, \"mean\": 0.8868229150269401, \"std\": 0.05392030606308284, \"min\": 0.0, \"25%\": 0.8511956948937329, \"50%\": 0.888593282341204, \"75%\": 0.927109471838438, \"max\": 1.0}, \" Long-term fund suitability ratio (A)\": {\"count\": 6819.0, \"mean\": 0.00878273381503679, \"std\": 0.028152926049290605, \"min\": 0.0, \"25%\": 0.0052436836906082, \"50%\": 0.0056646361117639, \"75%\": 0.00684743246553585, \"max\": 1.0}, \" Borrowing dependency\": {\"count\": 6819.0, \"mean\": 0.37465429459872324, \"std\": 0.016286163355500444, \"min\": 0.0, \"25%\": 0.3701678435547765, \"50%\": 0.372624322553083, \"75%\": 0.3762707372009225, \"max\": 1.0}, \" Contingent liabilities/Net worth\": {\"count\": 6819.0, \"mean\": 0.0059682772664790325, \"std\": 0.012188361875857312, \"min\": 0.0, \"25%\": 0.0053658477137564, \"50%\": 0.0053658477137564, \"75%\": 0.00576435604952715, \"max\": 1.0}, \" Operating profit/Paid-in capital\": {\"count\": 6819.0, \"mean\": 0.10897668140338518, \"std\": 0.02778168598564047, \"min\": 0.0, \"25%\": 0.0961046786197013, \"50%\": 0.104133079290635, \"75%\": 0.115927337274252, \"max\": 1.0}, \" Net profit before tax/Paid-in capital\": {\"count\": 6819.0, \"mean\": 0.18271502907673604, \"std\": 0.030784771508309793, \"min\": 0.0, \"25%\": 0.169376366789835, \"50%\": 0.178455621747983, \"75%\": 0.191606967800317, \"max\": 1.0}, \" Inventory and accounts receivable/Net value\": {\"count\": 6819.0, \"mean\": 0.40245933052066923, \"std\": 0.013324079587932275, \"min\": 0.0, \"25%\": 0.3974026791778925, \"50%\": 0.40013102490143, \"75%\": 0.404550770809581, \"max\": 1.0}, \" Total Asset Turnover\": {\"count\": 6819.0, \"mean\": 0.14160561602172958, \"std\": 0.1011449684929233, \"min\": 0.0, \"25%\": 0.0764617691154423, \"50%\": 0.118440779610195, \"75%\": 0.176911544227886, \"max\": 1.0}, \" Accounts Receivable Turnover\": {\"count\": 6819.0, \"mean\": 12789705.237553563, \"std\": 278259836.9840667, \"min\": 0.0, \"25%\": 0.0007101336065656, \"50%\": 0.0009678106580909, \"75%\": 0.0014547594168788, \"max\": 9740000000.0}, \" Average Collection Days\": {\"count\": 6819.0, \"mean\": 9826220.861191595, \"std\": 256358895.70533204, \"min\": 0.0, \"25%\": 0.0043865304397204, \"50%\": 0.0065725374332349, \"75%\": 0.00897287558119175, \"max\": 9730000000.0}, \" Inventory Turnover Rate (times)\": {\"count\": 6819.0, \"mean\": 2149106056.607619, \"std\": 3247967014.047812, \"min\": 0.0, \"25%\": 0.0001728255554827, \"50%\": 0.0007646742653862, \"75%\": 4620000000.0, \"max\": 9990000000.0}, \" Fixed Assets Turnover Frequency\": {\"count\": 6819.0, \"mean\": 1008595981.8175156, \"std\": 2477557316.9201517, \"min\": 0.0, \"25%\": 0.0002330013064716, \"50%\": 0.000593094234655, \"75%\": 
0.0036523711287173, \"max\": 9990000000.0}, \" Net Worth Turnover Rate (times)\": {\"count\": 6819.0, \"mean\": 0.038595054614951586, \"std\": 0.036680343560413615, \"min\": 0.0, \"25%\": 0.0217741935483871, \"50%\": 0.0295161290322581, \"75%\": 0.0429032258064516, \"max\": 1.0}, \" Revenue per person\": {\"count\": 6819.0, \"mean\": 2325854.266358276, \"std\": 136632654.3899363, \"min\": 0.0, \"25%\": 0.010432854016421151, \"50%\": 0.0186155134174464, \"75%\": 0.0358547655068079, \"max\": 8810000000.0}, \" Operating profit per person\": {\"count\": 6819.0, \"mean\": 0.40067101508133507, \"std\": 0.032720144194699534, \"min\": 0.0, \"25%\": 0.392437981954275, \"50%\": 0.395897876574478, \"75%\": 0.40185093055335697, \"max\": 1.0}, \" Allocation rate per person\": {\"count\": 6819.0, \"mean\": 11255785.321742088, \"std\": 294506294.11677057, \"min\": 0.0, \"25%\": 0.004120528997963601, \"50%\": 0.0078443733586557, \"75%\": 0.015020308976719, \"max\": 9570000000.0}, \" Working Capital to Total Assets\": {\"count\": 6819.0, \"mean\": 0.814125170261333, \"std\": 0.0590544026482635, \"min\": 0.0, \"25%\": 0.774308962762401, \"50%\": 0.81027522898466, \"75%\": 0.8503828485419616, \"max\": 1.0}, \" Quick Assets/Total Assets\": {\"count\": 6819.0, \"mean\": 0.4001318123650569, \"std\": 0.20199806668068215, \"min\": 0.0, \"25%\": 0.24197285659394002, \"50%\": 0.386450924981744, \"75%\": 0.540593673285078, \"max\": 1.0}, \" Current Assets/Total Assets\": {\"count\": 6819.0, \"mean\": 0.5222734467680338, \"std\": 0.21811182151419323, \"min\": 0.0, \"25%\": 0.35284541721511353, \"50%\": 0.514829793890847, \"75%\": 0.6890506806831516, \"max\": 1.0}, \" Cash/Total Assets\": {\"count\": 6819.0, \"mean\": 0.12409456048965214, \"std\": 0.13925058358332645, \"min\": 0.0, \"25%\": 0.03354322123979425, \"50%\": 0.0748874639354301, \"75%\": 0.1610731518633315, \"max\": 1.0}, \" Quick Assets/Current Liability\": {\"count\": 6819.0, \"mean\": 3592902.1968296515, \"std\": 171620908.60682163, \"min\": 0.0, \"25%\": 0.00523977582664085, \"50%\": 0.0079088979804512, \"75%\": 0.0129509103075746, \"max\": 8820000000.0}, \" Cash/Current Liability\": {\"count\": 6819.0, \"mean\": 37159994.147133335, \"std\": 510350903.16273063, \"min\": 0.0, \"25%\": 0.0019730075415488497, \"50%\": 0.0049038864700734, \"75%\": 0.0128055731079178, \"max\": 9650000000.0}, \" Current Liability to Assets\": {\"count\": 6819.0, \"mean\": 0.0906727945676238, \"std\": 0.05028985666891821, \"min\": 0.0, \"25%\": 0.0533012764320206, \"50%\": 0.0827047949822228, \"75%\": 0.1195229934695275, \"max\": 1.0}, \" Operating Funds to Liability\": {\"count\": 6819.0, \"mean\": 0.35382800412158655, \"std\": 0.035147184179188065, \"min\": 0.0, \"25%\": 0.34102297735578047, \"50%\": 0.348596657106137, \"75%\": 0.3609148870133705, \"max\": 1.0}, \" Inventory/Working Capital\": {\"count\": 6819.0, \"mean\": 0.27739510610233165, \"std\": 0.010468846972945228, \"min\": 0.0, \"25%\": 0.2770339694810945, \"50%\": 0.277177699032242, \"75%\": 0.2774287054274715, \"max\": 1.0}, \" Inventory/Current Liability\": {\"count\": 6819.0, \"mean\": 55806804.52577958, \"std\": 582051554.6194191, \"min\": 0.0, \"25%\": 0.0031631476746991002, \"50%\": 0.0064973353534734, \"75%\": 0.011146766748190151, \"max\": 9910000000.0}, \" Current Liabilities/Liability\": {\"count\": 6819.0, \"mean\": 0.7615988775853332, \"std\": 0.20667676768344223, \"min\": 0.0, \"25%\": 0.6269807662218725, \"50%\": 0.806881404713333, \"75%\": 0.942026693700069, \"max\": 1.0}, \" Working 
Capital/Equity\": {\"count\": 6819.0, \"mean\": 0.7358165257322183, \"std\": 0.011678026475599061, \"min\": 0.0, \"25%\": 0.733611818564342, \"50%\": 0.736012732265696, \"75%\": 0.738559910578823, \"max\": 1.0}, \" Current Liabilities/Equity\": {\"count\": 6819.0, \"mean\": 0.33140980061698827, \"std\": 0.013488027908897866, \"min\": 0.0, \"25%\": 0.328095841686878, \"50%\": 0.329685133135929, \"75%\": 0.332322404809702, \"max\": 1.0}, \" Long-term Liability to Current Assets\": {\"count\": 6819.0, \"mean\": 54160038.13589435, \"std\": 570270621.9592104, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0019746187761809, \"75%\": 0.009005945944256601, \"max\": 9540000000.0}, \" Retained Earnings to Total Assets\": {\"count\": 6819.0, \"mean\": 0.9347327541270043, \"std\": 0.025564221690643103, \"min\": 0.0, \"25%\": 0.9310965081459854, \"50%\": 0.937672322031461, \"75%\": 0.9448112860939986, \"max\": 1.0}, \" Total income/Total expense\": {\"count\": 6819.0, \"mean\": 0.002548945567386416, \"std\": 0.01209281469621801, \"min\": 0.0, \"25%\": 0.0022355962096577498, \"50%\": 0.0023361709310448, \"75%\": 0.0024918511193838, \"max\": 1.0}, \" Total expense/Assets\": {\"count\": 6819.0, \"mean\": 0.02918409925586063, \"std\": 0.02714877679286165, \"min\": 0.0, \"25%\": 0.01456705658927065, \"50%\": 0.0226739487842648, \"75%\": 0.035930137895265155, \"max\": 1.0}, \" Current Asset Turnover Rate\": {\"count\": 6819.0, \"mean\": 1195855763.3089354, \"std\": 2821161238.262308, \"min\": 0.0, \"25%\": 0.00014562362973865, \"50%\": 0.0001987815566631, \"75%\": 0.0004525945407579, \"max\": 10000000000.0}, \" Quick Asset Turnover Rate\": {\"count\": 6819.0, \"mean\": 2163735272.034426, \"std\": 3374944402.166023, \"min\": 0.0, \"25%\": 0.00014171486236355001, \"50%\": 0.0002247727878357, \"75%\": 4900000000.0, \"max\": 10000000000.0}, \" Working capitcal Turnover Rate\": {\"count\": 6819.0, \"mean\": 0.5940062655659162, \"std\": 0.008959384178922204, \"min\": 0.0, \"25%\": 0.5939344215587965, \"50%\": 0.593962767104877, \"75%\": 0.5940023454696105, \"max\": 1.0}, \" Cash Turnover Rate\": {\"count\": 6819.0, \"mean\": 2471976967.444318, \"std\": 2938623226.6788445, \"min\": 0.0, \"25%\": 0.0002735337396781, \"50%\": 1080000000.0, \"75%\": 4510000000.0, \"max\": 10000000000.0}, \" Cash Flow to Sales\": {\"count\": 6819.0, \"mean\": 0.6715307810992098, \"std\": 0.0093413456183006, \"min\": 0.0, \"25%\": 0.671565259253275, \"50%\": 0.671573958092574, \"75%\": 0.671586580417158, \"max\": 1.0}, \" Fixed Assets to Assets\": {\"count\": 6819.0, \"mean\": 1220120.5015895537, \"std\": 100754158.71316808, \"min\": 0.0, \"25%\": 0.0853603651897917, \"50%\": 0.196881048224411, \"75%\": 0.3721999782647555, \"max\": 8320000000.0}, \" Current Liability to Liability\": {\"count\": 6819.0, \"mean\": 0.7615988775853332, \"std\": 0.20667676768344223, \"min\": 0.0, \"25%\": 0.6269807662218725, \"50%\": 0.806881404713333, \"75%\": 0.942026693700069, \"max\": 1.0}, \" Current Liability to Equity\": {\"count\": 6819.0, \"mean\": 0.33140980061698827, \"std\": 0.013488027908897866, \"min\": 0.0, \"25%\": 0.328095841686878, \"50%\": 0.329685133135929, \"75%\": 0.332322404809702, \"max\": 1.0}, \" Equity to Long-term Liability\": {\"count\": 6819.0, \"mean\": 0.11564465149636942, \"std\": 0.019529176275314197, \"min\": 0.0, \"25%\": 0.110933233663468, \"50%\": 0.112340004024972, \"75%\": 0.117106091075626, \"max\": 1.0}, \" Cash Flow to Total Assets\": {\"count\": 6819.0, \"mean\": 0.6497305901792364, \"std\": 0.04737213191450496, \"min\": 
0.0, \"25%\": 0.633265319013864, \"50%\": 0.645366460270721, \"75%\": 0.6630618534616091, \"max\": 1.0}, \" Cash Flow to Liability\": {\"count\": 6819.0, \"mean\": 0.4618492532922571, \"std\": 0.029942680345244794, \"min\": 0.0, \"25%\": 0.4571164765642225, \"50%\": 0.459750137932885, \"75%\": 0.46423584697152853, \"max\": 1.0}, \" CFO to Assets\": {\"count\": 6819.0, \"mean\": 0.5934150861096208, \"std\": 0.05856055014224858, \"min\": 0.0, \"25%\": 0.5659869401753586, \"50%\": 0.593266274083544, \"75%\": 0.6247688757833555, \"max\": 1.0}, \" Cash Flow to Equity\": {\"count\": 6819.0, \"mean\": 0.3155823898995751, \"std\": 0.01296089240164725, \"min\": 0.0, \"25%\": 0.312994699600273, \"50%\": 0.314952752072916, \"75%\": 0.317707188742567, \"max\": 1.0}, \" Current Liability to Current Assets\": {\"count\": 6819.0, \"mean\": 0.031506365747440736, \"std\": 0.030844688453563848, \"min\": 0.0, \"25%\": 0.018033665707965, \"50%\": 0.0275971428517009, \"75%\": 0.0383746158541899, \"max\": 1.0}, \" Liability-Assets Flag\": {\"count\": 6819.0, \"mean\": 0.001173192550227306, \"std\": 0.034234310865302146, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 0.0, \"max\": 1.0}, \" Net Income to Total Assets\": {\"count\": 6819.0, \"mean\": 0.8077602200365486, \"std\": 0.040332191531426226, \"min\": 0.0, \"25%\": 0.7967498491931705, \"50%\": 0.810619042075101, \"75%\": 0.8264545295408715, \"max\": 1.0}, \" Total assets to GNP price\": {\"count\": 6819.0, \"mean\": 18629417.81183602, \"std\": 376450059.7458224, \"min\": 0.0, \"25%\": 0.0009036204813306, \"50%\": 0.0020852127088157, \"75%\": 0.0052697768568805, \"max\": 9820000000.0}, \" No-credit Interval\": {\"count\": 6819.0, \"mean\": 0.623914574767534, \"std\": 0.012289548007412282, \"min\": 0.0, \"25%\": 0.623636304973909, \"50%\": 0.623879225987712, \"75%\": 0.6241681927893561, \"max\": 1.0}, \" Gross Profit to Sales\": {\"count\": 6819.0, \"mean\": 0.607946340270717, \"std\": 0.016933807795673647, \"min\": 0.0, \"25%\": 0.6004428952063054, \"50%\": 0.605998288167218, \"75%\": 0.613913271038147, \"max\": 1.0}, \" Net Income to Stockholder's Equity\": {\"count\": 6819.0, \"mean\": 0.8404020646301005, \"std\": 0.01452252608252491, \"min\": 0.0, \"25%\": 0.8401148040637195, \"50%\": 0.841178760250192, \"75%\": 0.8423569700412374, \"max\": 1.0}, \" Liability to Equity\": {\"count\": 6819.0, \"mean\": 0.2803651538333931, \"std\": 0.014463223575594045, \"min\": 0.0, \"25%\": 0.276944242646329, \"50%\": 0.278777583629637, \"75%\": 0.2814491856088265, \"max\": 1.0}, \" Degree of Financial Leverage (DFL)\": {\"count\": 6819.0, \"mean\": 0.027541119421203627, \"std\": 0.01566794186642967, \"min\": 0.0, \"25%\": 0.0267911566924924, \"50%\": 0.0268081258982465, \"75%\": 0.026913184214613348, \"max\": 1.0}, \" Interest Coverage Ratio (Interest expense to EBIT)\": {\"count\": 6819.0, \"mean\": 0.5653579335465574, \"std\": 0.013214239761961918, \"min\": 0.0, \"25%\": 0.565158395757604, \"50%\": 0.565251928758969, \"75%\": 0.565724709506105, \"max\": 1.0}, \" Net Income Flag\": {\"count\": 6819.0, \"mean\": 1.0, \"std\": 0.0, \"min\": 1.0, \"25%\": 1.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 1.0}, \" Equity to Liability\": {\"count\": 6819.0, \"mean\": 0.047578356529497656, \"std\": 0.05001371618013796, \"min\": 0.0, \"25%\": 0.024476693570910098, \"50%\": 0.0337976972031022, \"75%\": 0.052837817459331596, \"max\": 1.0}}", "examples": "{\"Bankrupt?\":{\"0\":1,\"1\":1,\"2\":1,\"3\":1},\" ROA(C) before interest and depreciation before 
interest\":{\"0\":0.3705942573,\"1\":0.4642909375,\"2\":0.4260712719,\"3\":0.3998440014},\" ROA(A) before interest and % after tax\":{\"0\":0.4243894461,\"1\":0.53821413,\"2\":0.4990187527,\"3\":0.4512647187},\" ROA(B) before interest and depreciation after tax\":{\"0\":0.4057497725,\"1\":0.5167300177,\"2\":0.4722950907,\"3\":0.4577332834},\" Operating Gross Margin\":{\"0\":0.6014572133,\"1\":0.6102350855,\"2\":0.6014500065,\"3\":0.5835411292},\" Realized Sales Gross Margin\":{\"0\":0.6014572133,\"1\":0.6102350855,\"2\":0.601363525,\"3\":0.5835411292},\" Operating Profit Rate\":{\"0\":0.9989692032,\"1\":0.9989459782,\"2\":0.9988573535,\"3\":0.9986997471},\" Pre-tax net Interest Rate\":{\"0\":0.7968871459,\"1\":0.7973801913,\"2\":0.7964033693,\"3\":0.7969669683},\" After-tax net Interest Rate\":{\"0\":0.8088093609,\"1\":0.8093007257,\"2\":0.8083875215,\"3\":0.8089655977},\" Non-industry income and expenditure\\/revenue\":{\"0\":0.3026464339,\"1\":0.3035564303,\"2\":0.3020351773,\"3\":0.303349536},\" Continuous interest rate (after tax)\":{\"0\":0.7809848502,\"1\":0.7815059743,\"2\":0.7802839362,\"3\":0.7812409912},\" Operating Expense Rate\":{\"0\":0.0001256969,\"1\":0.0002897851,\"2\":0.0002361297,\"3\":0.0001078888},\" Research and development expense rate\":{\"0\":0.0,\"1\":0.0,\"2\":25500000.0,\"3\":0.0},\" Cash flow rate\":{\"0\":0.4581431435,\"1\":0.4618672572,\"2\":0.4585205875,\"3\":0.4657054427},\" Interest-bearing debt interest rate\":{\"0\":0.0007250725,\"1\":0.0006470647,\"2\":0.000790079,\"3\":0.0004490449},\" Tax rate (A)\":{\"0\":0.0,\"1\":0.0,\"2\":0.0,\"3\":0.0},\" Net Value Per Share (B)\":{\"0\":0.1479499389,\"1\":0.182251064,\"2\":0.1779107497,\"3\":0.1541865071},\" Net Value Per Share (A)\":{\"0\":0.1479499389,\"1\":0.182251064,\"2\":0.1779107497,\"3\":0.1541865071},\" Net Value Per Share (C)\":{\"0\":0.1479499389,\"1\":0.182251064,\"2\":0.193712865,\"3\":0.1541865071},\" Persistent EPS in the Last Four Seasons\":{\"0\":0.1691405881,\"1\":0.208943935,\"2\":0.1805805049,\"3\":0.1937222275},\" Cash Flow Per Share\":{\"0\":0.3116644267,\"1\":0.3181368041,\"2\":0.3071019311,\"3\":0.3216736224},\" Revenue Per Share (Yuan \\u00a5)\":{\"0\":0.0175597804,\"1\":0.021144335,\"2\":0.0059440083,\"3\":0.014368468},\" Operating Profit Per Share (Yuan \\u00a5)\":{\"0\":0.0959205276,\"1\":0.0937220096,\"2\":0.0923377575,\"3\":0.0777623972},\" Per Share Net profit before tax (Yuan \\u00a5)\":{\"0\":0.1387361603,\"1\":0.1699179031,\"2\":0.1428033441,\"3\":0.148602847},\" Realized Sales Gross Profit Growth Rate\":{\"0\":0.0221022784,\"1\":0.0220801699,\"2\":0.0227600968,\"3\":0.0220460669},\" Operating Profit Growth Rate\":{\"0\":0.8481949945,\"1\":0.8480878838,\"2\":0.8480940128,\"3\":0.8480054774},\" After-tax Net Profit Growth Rate\":{\"0\":0.6889794628,\"1\":0.6896929012,\"2\":0.689462677,\"3\":0.6891095356},\" Regular Net Profit Growth Rate\":{\"0\":0.6889794628,\"1\":0.6897017016,\"2\":0.6894696596,\"3\":0.6891095356},\" Continuous Net Profit Growth Rate\":{\"0\":0.2175353862,\"1\":0.2176195965,\"2\":0.217601299,\"3\":0.2175681883},\" Total Asset Growth Rate\":{\"0\":4980000000.0,\"1\":6110000000.0,\"2\":7280000000.0,\"3\":4880000000.0},\" Net Value Growth Rate\":{\"0\":0.0003269773,\"1\":0.0004430401,\"2\":0.0003964253,\"3\":0.0003824259},\" Total Asset Return Growth Rate Ratio\":{\"0\":0.2630999837,\"1\":0.2645157781,\"2\":0.2641839756,\"3\":0.2633711759},\" Cash Reinvestment %\":{\"0\":0.363725271,\"1\":0.376709139,\"2\":0.3689132298,\"3\":0.3840765992},\" Current 
Ratio\":{\"0\":0.0022589633,\"1\":0.0060162059,\"2\":0.0115425537,\"3\":0.0041940587},\" Quick Ratio\":{\"0\":0.0012077551,\"1\":0.0040393668,\"2\":0.0053475602,\"3\":0.0028964911},\" Interest Expense Ratio\":{\"0\":0.629951302,\"1\":0.6351724634,\"2\":0.6296314434,\"3\":0.630228353},\" Total debt\\/Total net worth\":{\"0\":0.0212659244,\"1\":0.0125023938,\"2\":0.021247686,\"3\":0.0095724017},\" Debt ratio %\":{\"0\":0.2075762615,\"1\":0.1711763461,\"2\":0.2075157965,\"3\":0.151464764},\" Net worth\\/Assets\":{\"0\":0.7924237385,\"1\":0.8288236539,\"2\":0.7924842035,\"3\":0.848535236},\" Long-term fund suitability ratio (A)\":{\"0\":0.0050244547,\"1\":0.0050588818,\"2\":0.0050998994,\"3\":0.0050469241},\" Borrowing dependency\":{\"0\":0.3902843544,\"1\":0.37676002,\"2\":0.3790929201,\"3\":0.3797426876},\" Contingent liabilities\\/Net worth\":{\"0\":0.0064785025,\"1\":0.0058350395,\"2\":0.0065619821,\"3\":0.0053658477},\" Operating profit\\/Paid-in capital\":{\"0\":0.095884834,\"1\":0.0937433843,\"2\":0.0923184653,\"3\":0.0777272949},\" Net profit before tax\\/Paid-in capital\":{\"0\":0.1377573335,\"1\":0.1689616168,\"2\":0.1480355931,\"3\":0.1475605158},\" Inventory and accounts receivable\\/Net value\":{\"0\":0.3980356983,\"1\":0.3977248825,\"2\":0.406580451,\"3\":0.3979245013},\" Total Asset Turnover\":{\"0\":0.0869565217,\"1\":0.0644677661,\"2\":0.0149925037,\"3\":0.0899550225},\" Accounts Receivable Turnover\":{\"0\":0.0018138841,\"1\":0.0012863563,\"2\":0.0014953385,\"3\":0.0019660556},\" Average Collection Days\":{\"0\":0.0034873643,\"1\":0.0049168079,\"2\":0.0042268495,\"3\":0.0032149673},\" Inventory Turnover Rate (times)\":{\"0\":0.0001820926,\"1\":9360000000.0,\"2\":65000000.0,\"3\":7130000000.0},\" Fixed Assets Turnover Frequency\":{\"0\":0.0001165007,\"1\":719000000.0,\"2\":2650000000.0,\"3\":9150000000.0},\" Net Worth Turnover Rate (times)\":{\"0\":0.0329032258,\"1\":0.025483871,\"2\":0.0133870968,\"3\":0.0280645161},\" Revenue per person\":{\"0\":0.034164182,\"1\":0.0068886506,\"2\":0.0289969596,\"3\":0.0154634784},\" Operating profit per person\":{\"0\":0.3929128695,\"1\":0.3915899686,\"2\":0.3819678433,\"3\":0.3784966419},\" Allocation rate per person\":{\"0\":0.0371353016,\"1\":0.0123349721,\"2\":0.1410163119,\"3\":0.0213199897},\" Working Capital to Total Assets\":{\"0\":0.6727752925,\"1\":0.751110917,\"2\":0.8295019149,\"3\":0.7257541797},\" Quick Assets\\/Total Assets\":{\"0\":0.1666729588,\"1\":0.1272360023,\"2\":0.3402008785,\"3\":0.1615745316},\" Current Assets\\/Total Assets\":{\"0\":0.1906429591,\"1\":0.1824190541,\"2\":0.6028057017,\"3\":0.2258148689},\" Cash\\/Total Assets\":{\"0\":0.004094406,\"1\":0.014947727,\"2\":0.0009909445,\"3\":0.0188506248},\" Quick Assets\\/Current Liability\":{\"0\":0.0019967709,\"1\":0.0041360298,\"2\":0.0063024814,\"3\":0.0029612377},\" Cash\\/Current Liability\":{\"0\":0.000147336,\"1\":0.0013839101,\"2\":5340000000.0,\"3\":0.0010106464},\" Current Liability to Assets\":{\"0\":0.1473084504,\"1\":0.0569628274,\"2\":0.0981620645,\"3\":0.0987146304},\" Operating Funds to Liability\":{\"0\":0.3340151713,\"1\":0.341105992,\"2\":0.3367314947,\"3\":0.348716439},\" Inventory\\/Working Capital\":{\"0\":0.2769201582,\"1\":0.2896415764,\"2\":0.2774555281,\"3\":0.2765803042},\" Inventory\\/Current Liability\":{\"0\":0.00103599,\"1\":0.0052096824,\"2\":0.0138787858,\"3\":0.0035401479},\" Current Liabilities\\/Liability\":{\"0\":0.6762691762,\"1\":0.308588593,\"2\":0.4460274872,\"3\":0.6158483686},\" Working 
Capital\\/Equity\":{\"0\":0.7212745515,\"1\":0.7319752885,\"2\":0.7427286376,\"3\":0.7298249087},\" Current Liabilities\\/Equity\":{\"0\":0.3390770068,\"1\":0.3297401479,\"2\":0.3347768513,\"3\":0.3315089787},\" Long-term Liability to Current Assets\":{\"0\":0.025592368,\"1\":0.0239468187,\"2\":0.0037151157,\"3\":0.0221651997},\" Retained Earnings to Total Assets\":{\"0\":0.9032247712,\"1\":0.9310652176,\"2\":0.9099033625,\"3\":0.9069021588},\" Total income\\/Total expense\":{\"0\":0.002021613,\"1\":0.0022256083,\"2\":0.0020600706,\"3\":0.0018313586},\" Total expense\\/Assets\":{\"0\":0.0648557077,\"1\":0.02551586,\"2\":0.0213874282,\"3\":0.0241610702},\" Current Asset Turnover Rate\":{\"0\":701000000.0,\"1\":0.0001065198,\"2\":0.0017910937,\"3\":8140000000.0},\" Quick Asset Turnover Rate\":{\"0\":6550000000.0,\"1\":7700000000.0,\"2\":0.0010226765,\"3\":6050000000.0},\" Working capitcal Turnover Rate\":{\"0\":0.593830504,\"1\":0.5939155479,\"2\":0.5945018513,\"3\":0.5938887926},\" Cash Turnover Rate\":{\"0\":458000000.0,\"1\":2490000000.0,\"2\":761000000.0,\"3\":2030000000.0},\" Cash Flow to Sales\":{\"0\":0.6715676536,\"1\":0.6715699423,\"2\":0.6715713218,\"3\":0.6715191702},\" Fixed Assets to Assets\":{\"0\":0.4242057622,\"1\":0.4688281283,\"2\":0.2761792222,\"3\":0.5591439633},\" Current Liability to Liability\":{\"0\":0.6762691762,\"1\":0.308588593,\"2\":0.4460274872,\"3\":0.6158483686},\" Current Liability to Equity\":{\"0\":0.3390770068,\"1\":0.3297401479,\"2\":0.3347768513,\"3\":0.3315089787},\" Equity to Long-term Liability\":{\"0\":0.1265494878,\"1\":0.1209161058,\"2\":0.1179223194,\"3\":0.1207604553},\" Cash Flow to Total Assets\":{\"0\":0.6375553953,\"1\":0.6410999847,\"2\":0.6427645502,\"3\":0.5790393123},\" Cash Flow to Liability\":{\"0\":0.4586091477,\"1\":0.4590010533,\"2\":0.4592540355,\"3\":0.4485179116},\" CFO to Assets\":{\"0\":0.5203819179,\"1\":0.5671013087,\"2\":0.5384905396,\"3\":0.6041050562},\" Cash Flow to Equity\":{\"0\":0.3129049481,\"1\":0.3141631352,\"2\":0.3145154263,\"3\":0.3023822548},\" Current Liability to Current Assets\":{\"0\":0.1182504766,\"1\":0.0477752816,\"2\":0.0253464891,\"3\":0.0672496173},\" Liability-Assets Flag\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0},\" Net Income to Total Assets\":{\"0\":0.7168453432,\"1\":0.795297136,\"2\":0.774669697,\"3\":0.7395545252},\" Total assets to GNP price\":{\"0\":0.00921944,\"1\":0.0083233018,\"2\":0.0400028529,\"3\":0.0032524753},\" No-credit Interval\":{\"0\":0.6228789594,\"1\":0.6236517417,\"2\":0.6238410376,\"3\":0.6229287091},\" Gross Profit to Sales\":{\"0\":0.6014532901,\"1\":0.6102365259,\"2\":0.6014493405,\"3\":0.5835376122},\" Net Income to Stockholder's Equity\":{\"0\":0.827890214,\"1\":0.839969268,\"2\":0.8367743086,\"3\":0.8346971068},\" Liability to Equity\":{\"0\":0.2902018928,\"1\":0.2838459798,\"2\":0.2901885329,\"3\":0.281721193},\" Degree of Financial Leverage (DFL)\":{\"0\":0.0266006308,\"1\":0.2645768198,\"2\":0.0265547199,\"3\":0.0266966344},\" Interest Coverage Ratio (Interest expense to EBIT)\":{\"0\":0.5640501123,\"1\":0.5701749464,\"2\":0.5637060765,\"3\":0.5646634203},\" Net Income Flag\":{\"0\":1,\"1\":1,\"2\":1,\"3\":1},\" Equity to Liability\":{\"0\":0.0164687409,\"1\":0.0207943063,\"2\":0.0164741143,\"3\":0.0239823322}}"}}]
| has_data_info | nb_filenames |
| --- | --- |
| true | 1 |
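The serialized column metadata above (dtypes, `info()` output, and summary statistics for `company-bankruptcy-prediction/data.csv`) can be reproduced directly from the CSV. The snippet below is a minimal sketch, not part of the original metadata: it assumes pandas is available and that `data.csv` sits at the path named above; the only column name used, `Bankrupt?`, and the quoted figures (6819 rows, 96 columns, ~3% positive class, leading spaces in column names) are taken from the metadata itself.

```python
import pandas as pd

# Load the bankruptcy dataset described in the metadata above.
df = pd.read_csv("company-bankruptcy-prediction/data.csv")

# Structural overview: per the metadata, 6819 rows x 96 columns,
# all numeric, with no missing values.
df.info()

# The summary statistics show a strongly imbalanced target:
# the mean of 'Bankrupt?' is ~0.032, i.e. roughly 3% bankrupt companies.
print(df["Bankrupt?"].value_counts(normalize=True))

# Most feature names carry a leading space in the source file
# (e.g. ' ROA(C) before interest and depreciation before interest'),
# so stripping column names is a sensible first cleaning step.
df.columns = df.columns.str.strip()
```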
<start_data_description><data_path>company-bankruptcy-prediction/data.csv:
<column_names>
['Bankrupt?', ' ROA(C) before interest and depreciation before interest', ' ROA(A) before interest and % after tax', ' ROA(B) before interest and depreciation after tax', ' Operating Gross Margin', ' Realized Sales Gross Margin', ' Operating Profit Rate', ' Pre-tax net Interest Rate', ' After-tax net Interest Rate', ' Non-industry income and expenditure/revenue', ' Continuous interest rate (after tax)', ' Operating Expense Rate', ' Research and development expense rate', ' Cash flow rate', ' Interest-bearing debt interest rate', ' Tax rate (A)', ' Net Value Per Share (B)', ' Net Value Per Share (A)', ' Net Value Per Share (C)', ' Persistent EPS in the Last Four Seasons', ' Cash Flow Per Share', ' Revenue Per Share (Yuan ¥)', ' Operating Profit Per Share (Yuan ¥)', ' Per Share Net profit before tax (Yuan ¥)', ' Realized Sales Gross Profit Growth Rate', ' Operating Profit Growth Rate', ' After-tax Net Profit Growth Rate', ' Regular Net Profit Growth Rate', ' Continuous Net Profit Growth Rate', ' Total Asset Growth Rate', ' Net Value Growth Rate', ' Total Asset Return Growth Rate Ratio', ' Cash Reinvestment %', ' Current Ratio', ' Quick Ratio', ' Interest Expense Ratio', ' Total debt/Total net worth', ' Debt ratio %', ' Net worth/Assets', ' Long-term fund suitability ratio (A)', ' Borrowing dependency', ' Contingent liabilities/Net worth', ' Operating profit/Paid-in capital', ' Net profit before tax/Paid-in capital', ' Inventory and accounts receivable/Net value', ' Total Asset Turnover', ' Accounts Receivable Turnover', ' Average Collection Days', ' Inventory Turnover Rate (times)', ' Fixed Assets Turnover Frequency', ' Net Worth Turnover Rate (times)', ' Revenue per person', ' Operating profit per person', ' Allocation rate per person', ' Working Capital to Total Assets', ' Quick Assets/Total Assets', ' Current Assets/Total Assets', ' Cash/Total Assets', ' Quick Assets/Current Liability', ' Cash/Current Liability', ' Current Liability to Assets', ' Operating Funds to Liability', ' Inventory/Working Capital', ' Inventory/Current Liability', ' Current Liabilities/Liability', ' Working Capital/Equity', ' Current Liabilities/Equity', ' Long-term Liability to Current Assets', ' Retained Earnings to Total Assets', ' Total income/Total expense', ' Total expense/Assets', ' Current Asset Turnover Rate', ' Quick Asset Turnover Rate', ' Working capitcal Turnover Rate', ' Cash Turnover Rate', ' Cash Flow to Sales', ' Fixed Assets to Assets', ' Current Liability to Liability', ' Current Liability to Equity', ' Equity to Long-term Liability', ' Cash Flow to Total Assets', ' Cash Flow to Liability', ' CFO to Assets', ' Cash Flow to Equity', ' Current Liability to Current Assets', ' Liability-Assets Flag', ' Net Income to Total Assets', ' Total assets to GNP price', ' No-credit Interval', ' Gross Profit to Sales', " Net Income to Stockholder's Equity", ' Liability to Equity', ' Degree of Financial Leverage (DFL)', ' Interest Coverage Ratio (Interest expense to EBIT)', ' Net Income Flag', ' Equity to Liability']
<column_types>
{'Bankrupt?': 'int64', ' ROA(C) before interest and depreciation before interest': 'float64', ' ROA(A) before interest and % after tax': 'float64', ' ROA(B) before interest and depreciation after tax': 'float64', ' Operating Gross Margin': 'float64', ' Realized Sales Gross Margin': 'float64', ' Operating Profit Rate': 'float64', ' Pre-tax net Interest Rate': 'float64', ' After-tax net Interest Rate': 'float64', ' Non-industry income and expenditure/revenue': 'float64', ' Continuous interest rate (after tax)': 'float64', ' Operating Expense Rate': 'float64', ' Research and development expense rate': 'float64', ' Cash flow rate': 'float64', ' Interest-bearing debt interest rate': 'float64', ' Tax rate (A)': 'float64', ' Net Value Per Share (B)': 'float64', ' Net Value Per Share (A)': 'float64', ' Net Value Per Share (C)': 'float64', ' Persistent EPS in the Last Four Seasons': 'float64', ' Cash Flow Per Share': 'float64', ' Revenue Per Share (Yuan ¥)': 'float64', ' Operating Profit Per Share (Yuan ¥)': 'float64', ' Per Share Net profit before tax (Yuan ¥)': 'float64', ' Realized Sales Gross Profit Growth Rate': 'float64', ' Operating Profit Growth Rate': 'float64', ' After-tax Net Profit Growth Rate': 'float64', ' Regular Net Profit Growth Rate': 'float64', ' Continuous Net Profit Growth Rate': 'float64', ' Total Asset Growth Rate': 'float64', ' Net Value Growth Rate': 'float64', ' Total Asset Return Growth Rate Ratio': 'float64', ' Cash Reinvestment %': 'float64', ' Current Ratio': 'float64', ' Quick Ratio': 'float64', ' Interest Expense Ratio': 'float64', ' Total debt/Total net worth': 'float64', ' Debt ratio %': 'float64', ' Net worth/Assets': 'float64', ' Long-term fund suitability ratio (A)': 'float64', ' Borrowing dependency': 'float64', ' Contingent liabilities/Net worth': 'float64', ' Operating profit/Paid-in capital': 'float64', ' Net profit before tax/Paid-in capital': 'float64', ' Inventory and accounts receivable/Net value': 'float64', ' Total Asset Turnover': 'float64', ' Accounts Receivable Turnover': 'float64', ' Average Collection Days': 'float64', ' Inventory Turnover Rate (times)': 'float64', ' Fixed Assets Turnover Frequency': 'float64', ' Net Worth Turnover Rate (times)': 'float64', ' Revenue per person': 'float64', ' Operating profit per person': 'float64', ' Allocation rate per person': 'float64', ' Working Capital to Total Assets': 'float64', ' Quick Assets/Total Assets': 'float64', ' Current Assets/Total Assets': 'float64', ' Cash/Total Assets': 'float64', ' Quick Assets/Current Liability': 'float64', ' Cash/Current Liability': 'float64', ' Current Liability to Assets': 'float64', ' Operating Funds to Liability': 'float64', ' Inventory/Working Capital': 'float64', ' Inventory/Current Liability': 'float64', ' Current Liabilities/Liability': 'float64', ' Working Capital/Equity': 'float64', ' Current Liabilities/Equity': 'float64', ' Long-term Liability to Current Assets': 'float64', ' Retained Earnings to Total Assets': 'float64', ' Total income/Total expense': 'float64', ' Total expense/Assets': 'float64', ' Current Asset Turnover Rate': 'float64', ' Quick Asset Turnover Rate': 'float64', ' Working capitcal Turnover Rate': 'float64', ' Cash Turnover Rate': 'float64', ' Cash Flow to Sales': 'float64', ' Fixed Assets to Assets': 'float64', ' Current Liability to Liability': 'float64', ' Current Liability to Equity': 'float64', ' Equity to Long-term Liability': 'float64', ' Cash Flow to Total Assets': 'float64', ' Cash Flow to Liability': 'float64', ' CFO to Assets': 
'float64', ' Cash Flow to Equity': 'float64', ' Current Liability to Current Assets': 'float64', ' Liability-Assets Flag': 'int64', ' Net Income to Total Assets': 'float64', ' Total assets to GNP price': 'float64', ' No-credit Interval': 'float64', ' Gross Profit to Sales': 'float64', " Net Income to Stockholder's Equity": 'float64', ' Liability to Equity': 'float64', ' Degree of Financial Leverage (DFL)': 'float64', ' Interest Coverage Ratio (Interest expense to EBIT)': 'float64', ' Net Income Flag': 'int64', ' Equity to Liability': 'float64'}
<dataframe_Summary>
{'Bankrupt?': {'count': 6819.0, 'mean': 0.03226279513125092, 'std': 0.17671017660774022, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 0.0, 'max': 1.0}, ' ROA(C) before interest and depreciation before interest': {'count': 6819.0, 'mean': 0.5051796332417815, 'std': 0.06068563875428444, 'min': 0.0, '25%': 0.476527080388047, '50%': 0.502705601325988, '75%': 0.535562813825379, 'max': 1.0}, ' ROA(A) before interest and % after tax': {'count': 6819.0, 'mean': 0.5586249158750463, 'std': 0.06562003103170726, 'min': 0.0, '25%': 0.53554295682512, '50%': 0.559801569995639, '75%': 0.58915721761884, 'max': 1.0}, ' ROA(B) before interest and depreciation after tax': {'count': 6819.0, 'mean': 0.5535887093516657, 'std': 0.06159480929187566, 'min': 0.0, '25%': 0.527276620804112, '50%': 0.552277959205525, '75%': 0.584105144815033, 'max': 1.0}, ' Operating Gross Margin': {'count': 6819.0, 'mean': 0.6079480383703836, 'std': 0.01693381254822146, 'min': 0.0, '25%': 0.6004446590466855, '50%': 0.605997492036495, '75%': 0.613914152697502, 'max': 1.0}, ' Realized Sales Gross Margin': {'count': 6819.0, 'mean': 0.6079294691769791, 'std': 0.01691607005567578, 'min': 0.0, '25%': 0.600433848859165, '50%': 0.605975871661454, '75%': 0.6138420847806975, 'max': 1.0}, ' Operating Profit Rate': {'count': 6819.0, 'mean': 0.9987551277900442, 'std': 0.013010025092984125, 'min': 0.0, '25%': 0.998969203197885, '50%': 0.999022239374566, '75%': 0.999094514164357, 'max': 1.0}, ' Pre-tax net Interest Rate': {'count': 6819.0, 'mean': 0.7971897524712905, 'std': 0.012868988419884597, 'min': 0.0, '25%': 0.797385863236893, '50%': 0.797463610578231, '75%': 0.797578848185589, 'max': 1.0}, ' After-tax net Interest Rate': {'count': 6819.0, 'mean': 0.8090835935135348, 'std': 0.013600653945149043, 'min': 0.0, '25%': 0.809311597146491, '50%': 0.809375198550956, '75%': 0.809469266134837, 'max': 1.0}, ' Non-industry income and expenditure/revenue': {'count': 6819.0, 'mean': 0.303622923649734, 'std': 0.011163439838128548, 'min': 0.0, '25%': 0.30346627659685, '50%': 0.303525492830123, '75%': 0.303585192461218, 'max': 1.0}, ' Continuous interest rate (after tax)': {'count': 6819.0, 'mean': 0.7813814325261418, 'std': 0.012679004028913246, 'min': 0.0, '25%': 0.7815668165898519, '50%': 0.781634957112874, '75%': 0.7817353784192015, 'max': 1.0}, ' Operating Expense Rate': {'count': 6819.0, 'mean': 1995347312.8028853, 'std': 3237683890.5223837, 'min': 0.0, '25%': 0.0001566874492428, '50%': 0.0002777588583625, '75%': 4145000000.0, 'max': 9990000000.0}, ' Research and development expense rate': {'count': 6819.0, 'mean': 1950427306.0568295, 'std': 2598291553.9983206, 'min': 0.0, '25%': 0.000128187953762, '50%': 509000000.0, '75%': 3450000000.0, 'max': 9980000000.0}, ' Cash flow rate': {'count': 6819.0, 'mean': 0.4674311857796612, 'std': 0.017035517308785362, 'min': 0.0, '25%': 0.4615577531181065, '50%': 0.465079724549793, '75%': 0.471003917029432, 'max': 1.0}, ' Interest-bearing debt interest rate': {'count': 6819.0, 'mean': 16448012.905942537, 'std': 108275033.532824, 'min': 0.0, '25%': 0.0002030203020302, '50%': 0.0003210321032103, '75%': 0.0005325532553255, 'max': 990000000.0}, ' Tax rate (A)': {'count': 6819.0, 'mean': 0.11500074794142427, 'std': 0.13866749672835132, 'min': 0.0, '25%': 0.0, '50%': 0.0734892195566353, '75%': 0.205840672132807, 'max': 1.0}, ' Net Value Per Share (B)': {'count': 6819.0, 'mean': 0.19066057949747392, 'std': 0.033389768351330985, 'min': 0.0, '25%': 0.173612574269942, '50%': 0.184400151700308, '75%': 0.199570182461759, 'max': 1.0}, 
' Net Value Per Share (A)': {'count': 6819.0, 'mean': 0.19063317896774643, 'std': 0.033473514172429, 'min': 0.0, '25%': 0.173612574269942, '50%': 0.184400151700308, '75%': 0.199570182461759, 'max': 1.0}, ' Net Value Per Share (C)': {'count': 6819.0, 'mean': 0.1906723702531618, 'std': 0.03348013767040907, 'min': 0.0, '25%': 0.1736757827314485, '50%': 0.184400151700308, '75%': 0.199612321436096, 'max': 1.0}, ' Persistent EPS in the Last Four Seasons': {'count': 6819.0, 'mean': 0.22881285256452782, 'std': 0.03326261307597689, 'min': 0.0, '25%': 0.214711165736976, '50%': 0.22454382149948, '75%': 0.2388200813085, 'max': 1.0}, ' Cash Flow Per Share': {'count': 6819.0, 'mean': 0.32348191216983224, 'std': 0.01761091295834377, 'min': 0.0, '25%': 0.317747754120393, '50%': 0.322487090613284, '75%': 0.3286234703260945, 'max': 1.0}, ' Revenue Per Share (Yuan ¥)': {'count': 6819.0, 'mean': 1328640.6020960682, 'std': 51707089.76790663, 'min': 0.0, '25%': 0.01563138073415305, '50%': 0.0273757127516373, '75%': 0.0463572152396509, 'max': 3020000000.0}, ' Operating Profit Per Share (Yuan ¥)': {'count': 6819.0, 'mean': 0.10909073887546925, 'std': 0.027942244774416095, 'min': 0.0, '25%': 0.0960833808321798, '50%': 0.104226040224737, '75%': 0.1161550362348345, 'max': 1.0}, ' Per Share Net profit before tax (Yuan ¥)': {'count': 6819.0, 'mean': 0.18436057764203345, 'std': 0.03318020898090537, 'min': 0.0, '25%': 0.170369812457634, '50%': 0.179709271672818, '75%': 0.193492505837162, 'max': 1.0}, ' Realized Sales Gross Profit Growth Rate': {'count': 6819.0, 'mean': 0.02240785447416587, 'std': 0.012079270152911608, 'min': 0.0, '25%': 0.022064532735505453, '50%': 0.0221023731764072, '75%': 0.022153148426612798, 'max': 1.0}, ' Operating Profit Growth Rate': {'count': 6819.0, 'mean': 0.8479799951688084, 'std': 0.010752477405401351, 'min': 0.0, '25%': 0.8479841081819834, '50%': 0.848043533745768, '75%': 0.8481225403945605, 'max': 1.0}, ' After-tax Net Profit Growth Rate': {'count': 6819.0, 'mean': 0.6891461185681318, 'std': 0.013853022260934765, 'min': 0.0, '25%': 0.6892699337448115, '50%': 0.689438526343149, '75%': 0.6896471679790515, 'max': 1.0}, ' Regular Net Profit Growth Rate': {'count': 6819.0, 'mean': 0.6891500117795616, 'std': 0.0139102834140106, 'min': 0.0, '25%': 0.689270265563206, '50%': 0.689438555196922, '75%': 0.6896470092832976, 'max': 1.0}, ' Continuous Net Profit Growth Rate': {'count': 6819.0, 'mean': 0.21763901299696697, 'std': 0.01006296314611611, 'min': 0.0, '25%': 0.2175795122117655, '50%': 0.217598046961963, '75%': 0.217621501194243, 'max': 1.0}, ' Total Asset Growth Rate': {'count': 6819.0, 'mean': 5508096595.248749, 'std': 2897717771.1697035, 'min': 0.0, '25%': 4860000000.0, '50%': 6400000000.0, '75%': 7390000000.0, 'max': 9990000000.0}, ' Net Value Growth Rate': {'count': 6819.0, 'mean': 1566212.055241067, 'std': 114159389.51834548, 'min': 0.0, '25%': 0.0004409688868264, '50%': 0.0004619555222076, '75%': 0.000499362141038, 'max': 9330000000.0}, ' Total Asset Return Growth Rate Ratio': {'count': 6819.0, 'mean': 0.2642475118758414, 'std': 0.009634208862611605, 'min': 0.0, '25%': 0.263758926420651, '50%': 0.264049545034229, '75%': 0.264388341065032, 'max': 1.0}, ' Cash Reinvestment %': {'count': 6819.0, 'mean': 0.37967667232266245, 'std': 0.020736565809616806, 'min': 0.0, '25%': 0.37474851905666695, '50%': 0.380425468499683, '75%': 0.386731120301032, 'max': 1.0}, ' Current Ratio': {'count': 6819.0, 'mean': 403284.954244977, 'std': 33302155.825480215, 'min': 0.0, '25%': 0.00755504663011965, '50%': 
0.0105871744549939, '75%': 0.0162695280201934, 'max': 2750000000.0}, ' Quick Ratio': {'count': 6819.0, 'mean': 8376594.819684891, 'std': 244684748.44687235, 'min': 0.0, '25%': 0.004725903227376101, '50%': 0.0074124720675444, '75%': 0.01224910697241505, 'max': 9230000000.0}, ' Interest Expense Ratio': {'count': 6819.0, 'mean': 0.6309910117124214, 'std': 0.011238461504050156, 'min': 0.0, '25%': 0.63061225188696, '50%': 0.630698209613567, '75%': 0.631125258558102, 'max': 1.0}, ' Total debt/Total net worth': {'count': 6819.0, 'mean': 4416336.714259365, 'std': 168406905.28151134, 'min': 0.0, '25%': 0.0030070491250148, '50%': 0.005546284390702, '75%': 0.00927329266179695, 'max': 9940000000.0}, ' Debt ratio %': {'count': 6819.0, 'mean': 0.11317708497306007, 'std': 0.05392030606308283, 'min': 0.0, '25%': 0.0728905281615624, '50%': 0.111406717658796, '75%': 0.148804305106267, 'max': 1.0}, ' Net worth/Assets': {'count': 6819.0, 'mean': 0.8868229150269401, 'std': 0.05392030606308284, 'min': 0.0, '25%': 0.8511956948937329, '50%': 0.888593282341204, '75%': 0.927109471838438, 'max': 1.0}, ' Long-term fund suitability ratio (A)': {'count': 6819.0, 'mean': 0.00878273381503679, 'std': 0.028152926049290605, 'min': 0.0, '25%': 0.0052436836906082, '50%': 0.0056646361117639, '75%': 0.00684743246553585, 'max': 1.0}, ' Borrowing dependency': {'count': 6819.0, 'mean': 0.37465429459872324, 'std': 0.016286163355500444, 'min': 0.0, '25%': 0.3701678435547765, '50%': 0.372624322553083, '75%': 0.3762707372009225, 'max': 1.0}, ' Contingent liabilities/Net worth': {'count': 6819.0, 'mean': 0.0059682772664790325, 'std': 0.012188361875857312, 'min': 0.0, '25%': 0.0053658477137564, '50%': 0.0053658477137564, '75%': 0.00576435604952715, 'max': 1.0}, ' Operating profit/Paid-in capital': {'count': 6819.0, 'mean': 0.10897668140338518, 'std': 0.02778168598564047, 'min': 0.0, '25%': 0.0961046786197013, '50%': 0.104133079290635, '75%': 0.115927337274252, 'max': 1.0}, ' Net profit before tax/Paid-in capital': {'count': 6819.0, 'mean': 0.18271502907673604, 'std': 0.030784771508309793, 'min': 0.0, '25%': 0.169376366789835, '50%': 0.178455621747983, '75%': 0.191606967800317, 'max': 1.0}, ' Inventory and accounts receivable/Net value': {'count': 6819.0, 'mean': 0.40245933052066923, 'std': 0.013324079587932275, 'min': 0.0, '25%': 0.3974026791778925, '50%': 0.40013102490143, '75%': 0.404550770809581, 'max': 1.0}, ' Total Asset Turnover': {'count': 6819.0, 'mean': 0.14160561602172958, 'std': 0.1011449684929233, 'min': 0.0, '25%': 0.0764617691154423, '50%': 0.118440779610195, '75%': 0.176911544227886, 'max': 1.0}, ' Accounts Receivable Turnover': {'count': 6819.0, 'mean': 12789705.237553563, 'std': 278259836.9840667, 'min': 0.0, '25%': 0.0007101336065656, '50%': 0.0009678106580909, '75%': 0.0014547594168788, 'max': 9740000000.0}, ' Average Collection Days': {'count': 6819.0, 'mean': 9826220.861191595, 'std': 256358895.70533204, 'min': 0.0, '25%': 0.0043865304397204, '50%': 0.0065725374332349, '75%': 0.00897287558119175, 'max': 9730000000.0}, ' Inventory Turnover Rate (times)': {'count': 6819.0, 'mean': 2149106056.607619, 'std': 3247967014.047812, 'min': 0.0, '25%': 0.0001728255554827, '50%': 0.0007646742653862, '75%': 4620000000.0, 'max': 9990000000.0}, ' Fixed Assets Turnover Frequency': {'count': 6819.0, 'mean': 1008595981.8175156, 'std': 2477557316.9201517, 'min': 0.0, '25%': 0.0002330013064716, '50%': 0.000593094234655, '75%': 0.0036523711287173, 'max': 9990000000.0}, ' Net Worth Turnover Rate (times)': {'count': 6819.0, 'mean': 
0.038595054614951586, 'std': 0.036680343560413615, 'min': 0.0, '25%': 0.0217741935483871, '50%': 0.0295161290322581, '75%': 0.0429032258064516, 'max': 1.0}, ' Revenue per person': {'count': 6819.0, 'mean': 2325854.266358276, 'std': 136632654.3899363, 'min': 0.0, '25%': 0.010432854016421151, '50%': 0.0186155134174464, '75%': 0.0358547655068079, 'max': 8810000000.0}, ' Operating profit per person': {'count': 6819.0, 'mean': 0.40067101508133507, 'std': 0.032720144194699534, 'min': 0.0, '25%': 0.392437981954275, '50%': 0.395897876574478, '75%': 0.40185093055335697, 'max': 1.0}, ' Allocation rate per person': {'count': 6819.0, 'mean': 11255785.321742088, 'std': 294506294.11677057, 'min': 0.0, '25%': 0.004120528997963601, '50%': 0.0078443733586557, '75%': 0.015020308976719, 'max': 9570000000.0}, ' Working Capital to Total Assets': {'count': 6819.0, 'mean': 0.814125170261333, 'std': 0.0590544026482635, 'min': 0.0, '25%': 0.774308962762401, '50%': 0.81027522898466, '75%': 0.8503828485419616, 'max': 1.0}, ' Quick Assets/Total Assets': {'count': 6819.0, 'mean': 0.4001318123650569, 'std': 0.20199806668068215, 'min': 0.0, '25%': 0.24197285659394002, '50%': 0.386450924981744, '75%': 0.540593673285078, 'max': 1.0}, ' Current Assets/Total Assets': {'count': 6819.0, 'mean': 0.5222734467680338, 'std': 0.21811182151419323, 'min': 0.0, '25%': 0.35284541721511353, '50%': 0.514829793890847, '75%': 0.6890506806831516, 'max': 1.0}, ' Cash/Total Assets': {'count': 6819.0, 'mean': 0.12409456048965214, 'std': 0.13925058358332645, 'min': 0.0, '25%': 0.03354322123979425, '50%': 0.0748874639354301, '75%': 0.1610731518633315, 'max': 1.0}, ' Quick Assets/Current Liability': {'count': 6819.0, 'mean': 3592902.1968296515, 'std': 171620908.60682163, 'min': 0.0, '25%': 0.00523977582664085, '50%': 0.0079088979804512, '75%': 0.0129509103075746, 'max': 8820000000.0}, ' Cash/Current Liability': {'count': 6819.0, 'mean': 37159994.147133335, 'std': 510350903.16273063, 'min': 0.0, '25%': 0.0019730075415488497, '50%': 0.0049038864700734, '75%': 0.0128055731079178, 'max': 9650000000.0}, ' Current Liability to Assets': {'count': 6819.0, 'mean': 0.0906727945676238, 'std': 0.05028985666891821, 'min': 0.0, '25%': 0.0533012764320206, '50%': 0.0827047949822228, '75%': 0.1195229934695275, 'max': 1.0}, ' Operating Funds to Liability': {'count': 6819.0, 'mean': 0.35382800412158655, 'std': 0.035147184179188065, 'min': 0.0, '25%': 0.34102297735578047, '50%': 0.348596657106137, '75%': 0.3609148870133705, 'max': 1.0}, ' Inventory/Working Capital': {'count': 6819.0, 'mean': 0.27739510610233165, 'std': 0.010468846972945228, 'min': 0.0, '25%': 0.2770339694810945, '50%': 0.277177699032242, '75%': 0.2774287054274715, 'max': 1.0}, ' Inventory/Current Liability': {'count': 6819.0, 'mean': 55806804.52577958, 'std': 582051554.6194191, 'min': 0.0, '25%': 0.0031631476746991002, '50%': 0.0064973353534734, '75%': 0.011146766748190151, 'max': 9910000000.0}, ' Current Liabilities/Liability': {'count': 6819.0, 'mean': 0.7615988775853332, 'std': 0.20667676768344223, 'min': 0.0, '25%': 0.6269807662218725, '50%': 0.806881404713333, '75%': 0.942026693700069, 'max': 1.0}, ' Working Capital/Equity': {'count': 6819.0, 'mean': 0.7358165257322183, 'std': 0.011678026475599061, 'min': 0.0, '25%': 0.733611818564342, '50%': 0.736012732265696, '75%': 0.738559910578823, 'max': 1.0}, ' Current Liabilities/Equity': {'count': 6819.0, 'mean': 0.33140980061698827, 'std': 0.013488027908897866, 'min': 0.0, '25%': 0.328095841686878, '50%': 0.329685133135929, '75%': 0.332322404809702, 
'max': 1.0}, ' Long-term Liability to Current Assets': {'count': 6819.0, 'mean': 54160038.13589435, 'std': 570270621.9592104, 'min': 0.0, '25%': 0.0, '50%': 0.0019746187761809, '75%': 0.009005945944256601, 'max': 9540000000.0}, ' Retained Earnings to Total Assets': {'count': 6819.0, 'mean': 0.9347327541270043, 'std': 0.025564221690643103, 'min': 0.0, '25%': 0.9310965081459854, '50%': 0.937672322031461, '75%': 0.9448112860939986, 'max': 1.0}, ' Total income/Total expense': {'count': 6819.0, 'mean': 0.002548945567386416, 'std': 0.01209281469621801, 'min': 0.0, '25%': 0.0022355962096577498, '50%': 0.0023361709310448, '75%': 0.0024918511193838, 'max': 1.0}, ' Total expense/Assets': {'count': 6819.0, 'mean': 0.02918409925586063, 'std': 0.02714877679286165, 'min': 0.0, '25%': 0.01456705658927065, '50%': 0.0226739487842648, '75%': 0.035930137895265155, 'max': 1.0}, ' Current Asset Turnover Rate': {'count': 6819.0, 'mean': 1195855763.3089354, 'std': 2821161238.262308, 'min': 0.0, '25%': 0.00014562362973865, '50%': 0.0001987815566631, '75%': 0.0004525945407579, 'max': 10000000000.0}, ' Quick Asset Turnover Rate': {'count': 6819.0, 'mean': 2163735272.034426, 'std': 3374944402.166023, 'min': 0.0, '25%': 0.00014171486236355001, '50%': 0.0002247727878357, '75%': 4900000000.0, 'max': 10000000000.0}, ' Working capitcal Turnover Rate': {'count': 6819.0, 'mean': 0.5940062655659162, 'std': 0.008959384178922204, 'min': 0.0, '25%': 0.5939344215587965, '50%': 0.593962767104877, '75%': 0.5940023454696105, 'max': 1.0}, ' Cash Turnover Rate': {'count': 6819.0, 'mean': 2471976967.444318, 'std': 2938623226.6788445, 'min': 0.0, '25%': 0.0002735337396781, '50%': 1080000000.0, '75%': 4510000000.0, 'max': 10000000000.0}, ' Cash Flow to Sales': {'count': 6819.0, 'mean': 0.6715307810992098, 'std': 0.0093413456183006, 'min': 0.0, '25%': 0.671565259253275, '50%': 0.671573958092574, '75%': 0.671586580417158, 'max': 1.0}, ' Fixed Assets to Assets': {'count': 6819.0, 'mean': 1220120.5015895537, 'std': 100754158.71316808, 'min': 0.0, '25%': 0.0853603651897917, '50%': 0.196881048224411, '75%': 0.3721999782647555, 'max': 8320000000.0}, ' Current Liability to Liability': {'count': 6819.0, 'mean': 0.7615988775853332, 'std': 0.20667676768344223, 'min': 0.0, '25%': 0.6269807662218725, '50%': 0.806881404713333, '75%': 0.942026693700069, 'max': 1.0}, ' Current Liability to Equity': {'count': 6819.0, 'mean': 0.33140980061698827, 'std': 0.013488027908897866, 'min': 0.0, '25%': 0.328095841686878, '50%': 0.329685133135929, '75%': 0.332322404809702, 'max': 1.0}, ' Equity to Long-term Liability': {'count': 6819.0, 'mean': 0.11564465149636942, 'std': 0.019529176275314197, 'min': 0.0, '25%': 0.110933233663468, '50%': 0.112340004024972, '75%': 0.117106091075626, 'max': 1.0}, ' Cash Flow to Total Assets': {'count': 6819.0, 'mean': 0.6497305901792364, 'std': 0.04737213191450496, 'min': 0.0, '25%': 0.633265319013864, '50%': 0.645366460270721, '75%': 0.6630618534616091, 'max': 1.0}, ' Cash Flow to Liability': {'count': 6819.0, 'mean': 0.4618492532922571, 'std': 0.029942680345244794, 'min': 0.0, '25%': 0.4571164765642225, '50%': 0.459750137932885, '75%': 0.46423584697152853, 'max': 1.0}, ' CFO to Assets': {'count': 6819.0, 'mean': 0.5934150861096208, 'std': 0.05856055014224858, 'min': 0.0, '25%': 0.5659869401753586, '50%': 0.593266274083544, '75%': 0.6247688757833555, 'max': 1.0}, ' Cash Flow to Equity': {'count': 6819.0, 'mean': 0.3155823898995751, 'std': 0.01296089240164725, 'min': 0.0, '25%': 0.312994699600273, '50%': 0.314952752072916, '75%': 
0.317707188742567, 'max': 1.0}, ' Current Liability to Current Assets': {'count': 6819.0, 'mean': 0.031506365747440736, 'std': 0.030844688453563848, 'min': 0.0, '25%': 0.018033665707965, '50%': 0.0275971428517009, '75%': 0.0383746158541899, 'max': 1.0}, ' Liability-Assets Flag': {'count': 6819.0, 'mean': 0.001173192550227306, 'std': 0.034234310865302146, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 0.0, 'max': 1.0}, ' Net Income to Total Assets': {'count': 6819.0, 'mean': 0.8077602200365486, 'std': 0.040332191531426226, 'min': 0.0, '25%': 0.7967498491931705, '50%': 0.810619042075101, '75%': 0.8264545295408715, 'max': 1.0}, ' Total assets to GNP price': {'count': 6819.0, 'mean': 18629417.81183602, 'std': 376450059.7458224, 'min': 0.0, '25%': 0.0009036204813306, '50%': 0.0020852127088157, '75%': 0.0052697768568805, 'max': 9820000000.0}, ' No-credit Interval': {'count': 6819.0, 'mean': 0.623914574767534, 'std': 0.012289548007412282, 'min': 0.0, '25%': 0.623636304973909, '50%': 0.623879225987712, '75%': 0.6241681927893561, 'max': 1.0}, ' Gross Profit to Sales': {'count': 6819.0, 'mean': 0.607946340270717, 'std': 0.016933807795673647, 'min': 0.0, '25%': 0.6004428952063054, '50%': 0.605998288167218, '75%': 0.613913271038147, 'max': 1.0}, " Net Income to Stockholder's Equity": {'count': 6819.0, 'mean': 0.8404020646301005, 'std': 0.01452252608252491, 'min': 0.0, '25%': 0.8401148040637195, '50%': 0.841178760250192, '75%': 0.8423569700412374, 'max': 1.0}, ' Liability to Equity': {'count': 6819.0, 'mean': 0.2803651538333931, 'std': 0.014463223575594045, 'min': 0.0, '25%': 0.276944242646329, '50%': 0.278777583629637, '75%': 0.2814491856088265, 'max': 1.0}, ' Degree of Financial Leverage (DFL)': {'count': 6819.0, 'mean': 0.027541119421203627, 'std': 0.01566794186642967, 'min': 0.0, '25%': 0.0267911566924924, '50%': 0.0268081258982465, '75%': 0.026913184214613348, 'max': 1.0}, ' Interest Coverage Ratio (Interest expense to EBIT)': {'count': 6819.0, 'mean': 0.5653579335465574, 'std': 0.013214239761961918, 'min': 0.0, '25%': 0.565158395757604, '50%': 0.565251928758969, '75%': 0.565724709506105, 'max': 1.0}, ' Net Income Flag': {'count': 6819.0, 'mean': 1.0, 'std': 0.0, 'min': 1.0, '25%': 1.0, '50%': 1.0, '75%': 1.0, 'max': 1.0}, ' Equity to Liability': {'count': 6819.0, 'mean': 0.047578356529497656, 'std': 0.05001371618013796, 'min': 0.0, '25%': 0.024476693570910098, '50%': 0.0337976972031022, '75%': 0.052837817459331596, 'max': 1.0}}
<dataframe_info>
RangeIndex: 6819 entries, 0 to 6818
Data columns (total 96 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Bankrupt? 6819 non-null int64
1 ROA(C) before interest and depreciation before interest 6819 non-null float64
2 ROA(A) before interest and % after tax 6819 non-null float64
3 ROA(B) before interest and depreciation after tax 6819 non-null float64
4 Operating Gross Margin 6819 non-null float64
5 Realized Sales Gross Margin 6819 non-null float64
6 Operating Profit Rate 6819 non-null float64
7 Pre-tax net Interest Rate 6819 non-null float64
8 After-tax net Interest Rate 6819 non-null float64
9 Non-industry income and expenditure/revenue 6819 non-null float64
10 Continuous interest rate (after tax) 6819 non-null float64
11 Operating Expense Rate 6819 non-null float64
12 Research and development expense rate 6819 non-null float64
13 Cash flow rate 6819 non-null float64
14 Interest-bearing debt interest rate 6819 non-null float64
15 Tax rate (A) 6819 non-null float64
16 Net Value Per Share (B) 6819 non-null float64
17 Net Value Per Share (A) 6819 non-null float64
18 Net Value Per Share (C) 6819 non-null float64
19 Persistent EPS in the Last Four Seasons 6819 non-null float64
20 Cash Flow Per Share 6819 non-null float64
21 Revenue Per Share (Yuan ¥) 6819 non-null float64
22 Operating Profit Per Share (Yuan ¥) 6819 non-null float64
23 Per Share Net profit before tax (Yuan ¥) 6819 non-null float64
24 Realized Sales Gross Profit Growth Rate 6819 non-null float64
25 Operating Profit Growth Rate 6819 non-null float64
26 After-tax Net Profit Growth Rate 6819 non-null float64
27 Regular Net Profit Growth Rate 6819 non-null float64
28 Continuous Net Profit Growth Rate 6819 non-null float64
29 Total Asset Growth Rate 6819 non-null float64
30 Net Value Growth Rate 6819 non-null float64
31 Total Asset Return Growth Rate Ratio 6819 non-null float64
32 Cash Reinvestment % 6819 non-null float64
33 Current Ratio 6819 non-null float64
34 Quick Ratio 6819 non-null float64
35 Interest Expense Ratio 6819 non-null float64
36 Total debt/Total net worth 6819 non-null float64
37 Debt ratio % 6819 non-null float64
38 Net worth/Assets 6819 non-null float64
39 Long-term fund suitability ratio (A) 6819 non-null float64
40 Borrowing dependency 6819 non-null float64
41 Contingent liabilities/Net worth 6819 non-null float64
42 Operating profit/Paid-in capital 6819 non-null float64
43 Net profit before tax/Paid-in capital 6819 non-null float64
44 Inventory and accounts receivable/Net value 6819 non-null float64
45 Total Asset Turnover 6819 non-null float64
46 Accounts Receivable Turnover 6819 non-null float64
47 Average Collection Days 6819 non-null float64
48 Inventory Turnover Rate (times) 6819 non-null float64
49 Fixed Assets Turnover Frequency 6819 non-null float64
50 Net Worth Turnover Rate (times) 6819 non-null float64
51 Revenue per person 6819 non-null float64
52 Operating profit per person 6819 non-null float64
53 Allocation rate per person 6819 non-null float64
54 Working Capital to Total Assets 6819 non-null float64
55 Quick Assets/Total Assets 6819 non-null float64
56 Current Assets/Total Assets 6819 non-null float64
57 Cash/Total Assets 6819 non-null float64
58 Quick Assets/Current Liability 6819 non-null float64
59 Cash/Current Liability 6819 non-null float64
60 Current Liability to Assets 6819 non-null float64
61 Operating Funds to Liability 6819 non-null float64
62 Inventory/Working Capital 6819 non-null float64
63 Inventory/Current Liability 6819 non-null float64
64 Current Liabilities/Liability 6819 non-null float64
65 Working Capital/Equity 6819 non-null float64
66 Current Liabilities/Equity 6819 non-null float64
67 Long-term Liability to Current Assets 6819 non-null float64
68 Retained Earnings to Total Assets 6819 non-null float64
69 Total income/Total expense 6819 non-null float64
70 Total expense/Assets 6819 non-null float64
71 Current Asset Turnover Rate 6819 non-null float64
72 Quick Asset Turnover Rate 6819 non-null float64
73 Working capitcal Turnover Rate 6819 non-null float64
74 Cash Turnover Rate 6819 non-null float64
75 Cash Flow to Sales 6819 non-null float64
76 Fixed Assets to Assets 6819 non-null float64
77 Current Liability to Liability 6819 non-null float64
78 Current Liability to Equity 6819 non-null float64
79 Equity to Long-term Liability 6819 non-null float64
80 Cash Flow to Total Assets 6819 non-null float64
81 Cash Flow to Liability 6819 non-null float64
82 CFO to Assets 6819 non-null float64
83 Cash Flow to Equity 6819 non-null float64
84 Current Liability to Current Assets 6819 non-null float64
85 Liability-Assets Flag 6819 non-null int64
86 Net Income to Total Assets 6819 non-null float64
87 Total assets to GNP price 6819 non-null float64
88 No-credit Interval 6819 non-null float64
89 Gross Profit to Sales 6819 non-null float64
90 Net Income to Stockholder's Equity 6819 non-null float64
91 Liability to Equity 6819 non-null float64
92 Degree of Financial Leverage (DFL) 6819 non-null float64
93 Interest Coverage Ratio (Interest expense to EBIT) 6819 non-null float64
94 Net Income Flag 6819 non-null int64
95 Equity to Liability 6819 non-null float64
dtypes: float64(93), int64(3)
memory usage: 5.0 MB
<some_examples>
{'Bankrupt?': {'0': 1, '1': 1, '2': 1, '3': 1}, ' ROA(C) before interest and depreciation before interest': {'0': 0.3705942573, '1': 0.4642909375, '2': 0.4260712719, '3': 0.3998440014}, ' ROA(A) before interest and % after tax': {'0': 0.4243894461, '1': 0.53821413, '2': 0.4990187527, '3': 0.4512647187}, ' ROA(B) before interest and depreciation after tax': {'0': 0.4057497725, '1': 0.5167300177, '2': 0.4722950907, '3': 0.4577332834}, ' Operating Gross Margin': {'0': 0.6014572133, '1': 0.6102350855, '2': 0.6014500065, '3': 0.5835411292}, ' Realized Sales Gross Margin': {'0': 0.6014572133, '1': 0.6102350855, '2': 0.601363525, '3': 0.5835411292}, ' Operating Profit Rate': {'0': 0.9989692032, '1': 0.9989459782, '2': 0.9988573535, '3': 0.9986997471}, ' Pre-tax net Interest Rate': {'0': 0.7968871459, '1': 0.7973801913, '2': 0.7964033693, '3': 0.7969669683}, ' After-tax net Interest Rate': {'0': 0.8088093609, '1': 0.8093007257, '2': 0.8083875215, '3': 0.8089655977}, ' Non-industry income and expenditure/revenue': {'0': 0.3026464339, '1': 0.3035564303, '2': 0.3020351773, '3': 0.303349536}, ' Continuous interest rate (after tax)': {'0': 0.7809848502, '1': 0.7815059743, '2': 0.7802839362, '3': 0.7812409912}, ' Operating Expense Rate': {'0': 0.0001256969, '1': 0.0002897851, '2': 0.0002361297, '3': 0.0001078888}, ' Research and development expense rate': {'0': 0.0, '1': 0.0, '2': 25500000.0, '3': 0.0}, ' Cash flow rate': {'0': 0.4581431435, '1': 0.4618672572, '2': 0.4585205875, '3': 0.4657054427}, ' Interest-bearing debt interest rate': {'0': 0.0007250725, '1': 0.0006470647, '2': 0.000790079, '3': 0.0004490449}, ' Tax rate (A)': {'0': 0.0, '1': 0.0, '2': 0.0, '3': 0.0}, ' Net Value Per Share (B)': {'0': 0.1479499389, '1': 0.182251064, '2': 0.1779107497, '3': 0.1541865071}, ' Net Value Per Share (A)': {'0': 0.1479499389, '1': 0.182251064, '2': 0.1779107497, '3': 0.1541865071}, ' Net Value Per Share (C)': {'0': 0.1479499389, '1': 0.182251064, '2': 0.193712865, '3': 0.1541865071}, ' Persistent EPS in the Last Four Seasons': {'0': 0.1691405881, '1': 0.208943935, '2': 0.1805805049, '3': 0.1937222275}, ' Cash Flow Per Share': {'0': 0.3116644267, '1': 0.3181368041, '2': 0.3071019311, '3': 0.3216736224}, ' Revenue Per Share (Yuan ¥)': {'0': 0.0175597804, '1': 0.021144335, '2': 0.0059440083, '3': 0.014368468}, ' Operating Profit Per Share (Yuan ¥)': {'0': 0.0959205276, '1': 0.0937220096, '2': 0.0923377575, '3': 0.0777623972}, ' Per Share Net profit before tax (Yuan ¥)': {'0': 0.1387361603, '1': 0.1699179031, '2': 0.1428033441, '3': 0.148602847}, ' Realized Sales Gross Profit Growth Rate': {'0': 0.0221022784, '1': 0.0220801699, '2': 0.0227600968, '3': 0.0220460669}, ' Operating Profit Growth Rate': {'0': 0.8481949945, '1': 0.8480878838, '2': 0.8480940128, '3': 0.8480054774}, ' After-tax Net Profit Growth Rate': {'0': 0.6889794628, '1': 0.6896929012, '2': 0.689462677, '3': 0.6891095356}, ' Regular Net Profit Growth Rate': {'0': 0.6889794628, '1': 0.6897017016, '2': 0.6894696596, '3': 0.6891095356}, ' Continuous Net Profit Growth Rate': {'0': 0.2175353862, '1': 0.2176195965, '2': 0.217601299, '3': 0.2175681883}, ' Total Asset Growth Rate': {'0': 4980000000.0, '1': 6110000000.0, '2': 7280000000.0, '3': 4880000000.0}, ' Net Value Growth Rate': {'0': 0.0003269773, '1': 0.0004430401, '2': 0.0003964253, '3': 0.0003824259}, ' Total Asset Return Growth Rate Ratio': {'0': 0.2630999837, '1': 0.2645157781, '2': 0.2641839756, '3': 0.2633711759}, ' Cash Reinvestment %': {'0': 0.363725271, '1': 0.376709139, '2': 0.3689132298, 
'3': 0.3840765992}, ' Current Ratio': {'0': 0.0022589633, '1': 0.0060162059, '2': 0.0115425537, '3': 0.0041940587}, ' Quick Ratio': {'0': 0.0012077551, '1': 0.0040393668, '2': 0.0053475602, '3': 0.0028964911}, ' Interest Expense Ratio': {'0': 0.629951302, '1': 0.6351724634, '2': 0.6296314434, '3': 0.630228353}, ' Total debt/Total net worth': {'0': 0.0212659244, '1': 0.0125023938, '2': 0.021247686, '3': 0.0095724017}, ' Debt ratio %': {'0': 0.2075762615, '1': 0.1711763461, '2': 0.2075157965, '3': 0.151464764}, ' Net worth/Assets': {'0': 0.7924237385, '1': 0.8288236539, '2': 0.7924842035, '3': 0.848535236}, ' Long-term fund suitability ratio (A)': {'0': 0.0050244547, '1': 0.0050588818, '2': 0.0050998994, '3': 0.0050469241}, ' Borrowing dependency': {'0': 0.3902843544, '1': 0.37676002, '2': 0.3790929201, '3': 0.3797426876}, ' Contingent liabilities/Net worth': {'0': 0.0064785025, '1': 0.0058350395, '2': 0.0065619821, '3': 0.0053658477}, ' Operating profit/Paid-in capital': {'0': 0.095884834, '1': 0.0937433843, '2': 0.0923184653, '3': 0.0777272949}, ' Net profit before tax/Paid-in capital': {'0': 0.1377573335, '1': 0.1689616168, '2': 0.1480355931, '3': 0.1475605158}, ' Inventory and accounts receivable/Net value': {'0': 0.3980356983, '1': 0.3977248825, '2': 0.406580451, '3': 0.3979245013}, ' Total Asset Turnover': {'0': 0.0869565217, '1': 0.0644677661, '2': 0.0149925037, '3': 0.0899550225}, ' Accounts Receivable Turnover': {'0': 0.0018138841, '1': 0.0012863563, '2': 0.0014953385, '3': 0.0019660556}, ' Average Collection Days': {'0': 0.0034873643, '1': 0.0049168079, '2': 0.0042268495, '3': 0.0032149673}, ' Inventory Turnover Rate (times)': {'0': 0.0001820926, '1': 9360000000.0, '2': 65000000.0, '3': 7130000000.0}, ' Fixed Assets Turnover Frequency': {'0': 0.0001165007, '1': 719000000.0, '2': 2650000000.0, '3': 9150000000.0}, ' Net Worth Turnover Rate (times)': {'0': 0.0329032258, '1': 0.025483871, '2': 0.0133870968, '3': 0.0280645161}, ' Revenue per person': {'0': 0.034164182, '1': 0.0068886506, '2': 0.0289969596, '3': 0.0154634784}, ' Operating profit per person': {'0': 0.3929128695, '1': 0.3915899686, '2': 0.3819678433, '3': 0.3784966419}, ' Allocation rate per person': {'0': 0.0371353016, '1': 0.0123349721, '2': 0.1410163119, '3': 0.0213199897}, ' Working Capital to Total Assets': {'0': 0.6727752925, '1': 0.751110917, '2': 0.8295019149, '3': 0.7257541797}, ' Quick Assets/Total Assets': {'0': 0.1666729588, '1': 0.1272360023, '2': 0.3402008785, '3': 0.1615745316}, ' Current Assets/Total Assets': {'0': 0.1906429591, '1': 0.1824190541, '2': 0.6028057017, '3': 0.2258148689}, ' Cash/Total Assets': {'0': 0.004094406, '1': 0.014947727, '2': 0.0009909445, '3': 0.0188506248}, ' Quick Assets/Current Liability': {'0': 0.0019967709, '1': 0.0041360298, '2': 0.0063024814, '3': 0.0029612377}, ' Cash/Current Liability': {'0': 0.000147336, '1': 0.0013839101, '2': 5340000000.0, '3': 0.0010106464}, ' Current Liability to Assets': {'0': 0.1473084504, '1': 0.0569628274, '2': 0.0981620645, '3': 0.0987146304}, ' Operating Funds to Liability': {'0': 0.3340151713, '1': 0.341105992, '2': 0.3367314947, '3': 0.348716439}, ' Inventory/Working Capital': {'0': 0.2769201582, '1': 0.2896415764, '2': 0.2774555281, '3': 0.2765803042}, ' Inventory/Current Liability': {'0': 0.00103599, '1': 0.0052096824, '2': 0.0138787858, '3': 0.0035401479}, ' Current Liabilities/Liability': {'0': 0.6762691762, '1': 0.308588593, '2': 0.4460274872, '3': 0.6158483686}, ' Working Capital/Equity': {'0': 0.7212745515, '1': 0.7319752885, '2': 
0.7427286376, '3': 0.7298249087}, ' Current Liabilities/Equity': {'0': 0.3390770068, '1': 0.3297401479, '2': 0.3347768513, '3': 0.3315089787}, ' Long-term Liability to Current Assets': {'0': 0.025592368, '1': 0.0239468187, '2': 0.0037151157, '3': 0.0221651997}, ' Retained Earnings to Total Assets': {'0': 0.9032247712, '1': 0.9310652176, '2': 0.9099033625, '3': 0.9069021588}, ' Total income/Total expense': {'0': 0.002021613, '1': 0.0022256083, '2': 0.0020600706, '3': 0.0018313586}, ' Total expense/Assets': {'0': 0.0648557077, '1': 0.02551586, '2': 0.0213874282, '3': 0.0241610702}, ' Current Asset Turnover Rate': {'0': 701000000.0, '1': 0.0001065198, '2': 0.0017910937, '3': 8140000000.0}, ' Quick Asset Turnover Rate': {'0': 6550000000.0, '1': 7700000000.0, '2': 0.0010226765, '3': 6050000000.0}, ' Working capitcal Turnover Rate': {'0': 0.593830504, '1': 0.5939155479, '2': 0.5945018513, '3': 0.5938887926}, ' Cash Turnover Rate': {'0': 458000000.0, '1': 2490000000.0, '2': 761000000.0, '3': 2030000000.0}, ' Cash Flow to Sales': {'0': 0.6715676536, '1': 0.6715699423, '2': 0.6715713218, '3': 0.6715191702}, ' Fixed Assets to Assets': {'0': 0.4242057622, '1': 0.4688281283, '2': 0.2761792222, '3': 0.5591439633}, ' Current Liability to Liability': {'0': 0.6762691762, '1': 0.308588593, '2': 0.4460274872, '3': 0.6158483686}, ' Current Liability to Equity': {'0': 0.3390770068, '1': 0.3297401479, '2': 0.3347768513, '3': 0.3315089787}, ' Equity to Long-term Liability': {'0': 0.1265494878, '1': 0.1209161058, '2': 0.1179223194, '3': 0.1207604553}, ' Cash Flow to Total Assets': {'0': 0.6375553953, '1': 0.6410999847, '2': 0.6427645502, '3': 0.5790393123}, ' Cash Flow to Liability': {'0': 0.4586091477, '1': 0.4590010533, '2': 0.4592540355, '3': 0.4485179116}, ' CFO to Assets': {'0': 0.5203819179, '1': 0.5671013087, '2': 0.5384905396, '3': 0.6041050562}, ' Cash Flow to Equity': {'0': 0.3129049481, '1': 0.3141631352, '2': 0.3145154263, '3': 0.3023822548}, ' Current Liability to Current Assets': {'0': 0.1182504766, '1': 0.0477752816, '2': 0.0253464891, '3': 0.0672496173}, ' Liability-Assets Flag': {'0': 0, '1': 0, '2': 0, '3': 0}, ' Net Income to Total Assets': {'0': 0.7168453432, '1': 0.795297136, '2': 0.774669697, '3': 0.7395545252}, ' Total assets to GNP price': {'0': 0.00921944, '1': 0.0083233018, '2': 0.0400028529, '3': 0.0032524753}, ' No-credit Interval': {'0': 0.6228789594, '1': 0.6236517417, '2': 0.6238410376, '3': 0.6229287091}, ' Gross Profit to Sales': {'0': 0.6014532901, '1': 0.6102365259, '2': 0.6014493405, '3': 0.5835376122}, " Net Income to Stockholder's Equity": {'0': 0.827890214, '1': 0.839969268, '2': 0.8367743086, '3': 0.8346971068}, ' Liability to Equity': {'0': 0.2902018928, '1': 0.2838459798, '2': 0.2901885329, '3': 0.281721193}, ' Degree of Financial Leverage (DFL)': {'0': 0.0266006308, '1': 0.2645768198, '2': 0.0265547199, '3': 0.0266966344}, ' Interest Coverage Ratio (Interest expense to EBIT)': {'0': 0.5640501123, '1': 0.5701749464, '2': 0.5637060765, '3': 0.5646634203}, ' Net Income Flag': {'0': 1, '1': 1, '2': 1, '3': 1}, ' Equity to Liability': {'0': 0.0164687409, '1': 0.0207943063, '2': 0.0164741143, '3': 0.0239823322}}
<end_description>
| 3,051 | 0 | 9,298 | 3,051 |
129529932
|
import pandas as pd
import gc
gc.collect()
data = pd.read_csv("/kaggle/input/predict-student-performance-from-game-play/train.csv")
def verify(data, col1, col2):
    # Count rows that are not "checkpoint" events but still have a missing value in col2.
    i = 0
    for index, row in data.iterrows():
        if row[col1] != "checkpoint" and pd.isnull(row[col2]):
            i += 1
    return i
# data2 is only defined at the very end of this notebook, so the check is run on the
# freshly loaded DataFrame here.
verify(data, "event_name", "text")
gc.collect()
data.head()
def change_data(data, col_to_change, col_to_del, save_path):
col_not_found = [col for col in col_to_del if col not in data.columns]
if col_not_found:
print("col not found")
return
data.drop(col_to_del, axis=1, inplace=True)
data[col_to_change] = data[col_to_change] / 1000.0
data.to_pickle(save_path)
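# Assuming the raw elapsed_time is recorded in milliseconds, the division by 1000.0
# inside change_data rescales it to seconds before the cleaned pickle is saved.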
change_data(
data,
"elapsed_time",
["hover_duration", "fullscreen", "hq", "music"],
"cleaneddata.pickle",
)
del data
gc.collect()
data = pd.read_pickle("/kaggle/working/cleaneddata.pickle")
gc.collect()
data.info(memory_usage="deep")
gc.collect()
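# Downcast numeric columns and store repeated strings as 'category' dtypes to shrink
# the memory footprint reported by data.info(memory_usage="deep") above.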
data["session_id"] = data["session_id"].astype("int32")
data["index"] = data["index"].astype("int32")
data["elapsed_time"] = data["elapsed_time"].astype("float32")
data["event_name"] = data["event_name"].astype("category")
data["name"] = data["name"].astype("category")
data["level"] = data["level"].astype("int32")
data["page"] = data["page"].astype("float32")
data["room_coor_x"] = data["room_coor_x"].astype("float32")
data["room_coor_y"] = data["room_coor_y"].astype("float32")
data["screen_coor_x"] = data["screen_coor_x"].astype("float32")
data["screen_coor_y"] = data["screen_coor_y"].astype("float32")
data["text"] = data["text"].astype("category")
data["fqid"] = data["fqid"].astype("category")
data["room_fqid"] = data["room_fqid"].astype("category")
data["text_fqid"] = data["text_fqid"].astype("category")
data["level_group"] = data["level_group"].astype("category")
data.to_pickle("output_pickle_file.pickle")
data2 = pd.read_pickle("/kaggle/working/output_pickle_file.pickle")
del data
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/529/129529932.ipynb
| null | null |
[{"Id": 129529932, "ScriptId": 38504870, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 6780400, "CreationDate": "05/14/2023 15:10:57", "VersionNumber": 1.0, "Title": "tabular-data processing", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 66.0, "LinesInsertedFromPrevious": 66.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
import gc
gc.collect()
data = pd.read_csv("/kaggle/input/predict-student-performance-from-game-play/train.csv")
def verify(data, col1, col2):
    # Count rows that are not "checkpoint" events but still have a missing value in col2.
    i = 0
    for index, row in data.iterrows():
        if row[col1] != "checkpoint" and pd.isnull(row[col2]):
            i += 1
    return i
# data2 is only defined at the very end of this notebook, so the check is run on the
# freshly loaded DataFrame here.
verify(data, "event_name", "text")
gc.collect()
data.head()
def change_data(data, col_to_change, col_to_del, save_path):
col_not_found = [col for col in col_to_del if col not in data.columns]
if col_not_found:
print("col not found")
return
data.drop(col_to_del, axis=1, inplace=True)
data[col_to_change] = data[col_to_change] / 1000.0
data.to_pickle(save_path)
change_data(
data,
"elapsed_time",
["hover_duration", "fullscreen", "hq", "music"],
"cleaneddata.pickle",
)
del data
gc.collect()
data = pd.read_pickle("/kaggle/working/cleaneddata.pickle")
gc.collect()
data.info(memory_usage="deep")
gc.collect()
data["session_id"] = data["session_id"].astype("int32")
data["index"] = data["index"].astype("int32")
data["elapsed_time"] = data["elapsed_time"].astype("float32")
data["event_name"] = data["event_name"].astype("category")
data["name"] = data["name"].astype("category")
data["level"] = data["level"].astype("int32")
data["page"] = data["page"].astype("float32")
data["room_coor_x"] = data["room_coor_x"].astype("float32")
data["room_coor_y"] = data["room_coor_y"].astype("float32")
data["screen_coor_x"] = data["screen_coor_x"].astype("float32")
data["screen_coor_y"] = data["screen_coor_y"].astype("float32")
data["text"] = data["text"].astype("category")
data["fqid"] = data["fqid"].astype("category")
data["room_fqid"] = data["room_fqid"].astype("category")
data["text_fqid"] = data["text_fqid"].astype("category")
data["level_group"] = data["level_group"].astype("category")
data.to_pickle("output_pickle_file.pickle")
data2 = pd.read_pickle("/kaggle/working/output_pickle_file.pickle")
del data
| false | 0 | 690 | 0 | 690 | 690 |
||
129973875
|
<jupyter_start><jupyter_text>List of Countries by number of Internet Users
### Context
This data set consists of the list of countries by the number of internet users in 2018. The data was collected from Wikipedia, and then I just created a Comma Separated Value format with the help of Microsoft Excel.
### Content
This is a trivial data set which has 6 columns and 215 rows containing the country name, population, population by rank, internet users, internet users rank and percentage.
### Inspiration
I want to find what are the important features that contribute to the rank of countries based on internet users. Also, I want to create a model that can predict the rank given the population and internet users.
Kaggle dataset identifier: list-of-countries-by-number-of-internet-users
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import seaborn as sns
df = pd.read_csv(
"/kaggle/input/list-of-countries-by-number-of-internet-users/List of Countries by number of Internet Users - Sheet1.csv"
)
df
countplot = sns.countplot(data=df, x="Internet Users")
df["Population"] = df["Population"].replace(",", "").astype(int)
Pop = df[df["Population"] < 10000]
by_Country = Pop.groupby("Country or Area", as_index=False)
mean_by_Country = by_Country["Population"].mean()
mean_by_Country = mean_by_Country.head()
barplot = sns.barplot(x="Country or Area", y="Population", data=mean_by_Country)
# ### As the figure shows, Ascension Island has the least population in the world
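# A quick cross-check of this claim without relying on the bar chart (a small optional
# sketch, assuming the cleaned integer Population column from above):
print(df.nsmallest(5, "Population")[["Country or Area", "Population"]])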
df["Population"] = df["Population"].replace(",", "").astype(int)
by_Country = df.groupby("Country or Area", as_index=False)["Population"].mean().head()
replot = sns.relplot(x="Country or Area", y="Population", data=by_Country)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/973/129973875.ipynb
|
list-of-countries-by-number-of-internet-users
|
tanuprabhu
|
[{"Id": 129973875, "ScriptId": 38660564, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14989378, "CreationDate": "05/17/2023 20:53:38", "VersionNumber": 1.0, "Title": "Data_Visualization2", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 39.0, "LinesInsertedFromPrevious": 39.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 186414554, "KernelVersionId": 129973875, "SourceDatasetVersionId": 605527}]
|
[{"Id": 605527, "DatasetId": 295424, "DatasourceVersionId": 622477, "CreatorUserId": 3200273, "LicenseName": "Data files \u00a9 Original Authors", "CreationDate": "08/10/2019 03:14:32", "VersionNumber": 1.0, "Title": "List of Countries by number of Internet Users", "Slug": "list-of-countries-by-number-of-internet-users", "Subtitle": "Countries with most internet usage around the world.", "Description": "### Context\n\nThis data set consists of the list of countries by the number of internet users in 2018. The data was collected from Wikipedia, and then I just created a Comma Separated Value format with the help of Microsoft Excel.\n\n\n### Content\n\nThis is a trivial data set which has 6 columns and 215 rows containing the country name, population, population by rank, internet users, internet users rank and percentage.\n\n\n\n### Inspiration\n\nI want to find what are the important features that contribute to the rank of countries based on internet users. Also, I want to create a model that can predict the rank given the population and internet users.", "VersionNotes": "Initial release", "TotalCompressedBytes": 10447.0, "TotalUncompressedBytes": 10447.0}]
|
[{"Id": 295424, "CreatorUserId": 3200273, "OwnerUserId": 3200273.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 605527.0, "CurrentDatasourceVersionId": 622477.0, "ForumId": 306870, "Type": 2, "CreationDate": "08/10/2019 03:14:32", "LastActivityDate": "08/10/2019", "TotalViews": 26529, "TotalDownloads": 3611, "TotalVotes": 47, "TotalKernels": 5}]
|
[{"Id": 3200273, "UserName": "tanuprabhu", "DisplayName": "Tanu N Prabhu", "RegisterDate": "05/09/2019", "PerformanceTier": 2}]
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import seaborn as sns
df = pd.read_csv(
"/kaggle/input/list-of-countries-by-number-of-internet-users/List of Countries by number of Internet Users - Sheet1.csv"
)
df
countplot = sns.countplot(data=df, x="Internet Users")
df["Population"] = df["Population"].replace(",", "").astype(int)
Pop = df[df["Population"] < 10000]
by_Country = Pop.groupby("Country or Area", as_index=False)
mean_by_Country = by_Country["Population"].mean()
mean_by_Country = mean_by_Country.head()
barplot = sns.barplot(x="Country or Area", y="Population", data=mean_by_Country)
# ### As the figure shows, Ascension Island has the least population in the world
df["Population"] = df["Population"].replace(",", "").astype(int)
by_Country = df.groupby("Country or Area", as_index=False)["Population"].mean().head()
replot = sns.relplot(x="Country or Area", y="Population", data=by_Country)
| false | 1 | 454 | 0 | 638 | 454 |
||
129973810
|
import pandas as pd
import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# Import necessary libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.metrics import confusion_matrix, accuracy_score
sns.set_style("whitegrid")
plt.style.use("fivethirtyeight")
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from keras.models import Sequential
from keras.callbacks import EarlyStopping
from keras.layers import Dense, LSTM, GRU, Dropout
from sklearn.preprocessing import MinMaxScaler
# Load the sea level data into a Pandas DataFrame
sea_level_df = pd.read_csv("/kaggle/input/sea-level/sealevel.csv")
# Separate the features and the target variable
X = sea_level_df.drop(["GMSL_noGIA"], axis=1)
y = sea_level_df["GMSL_noGIA"]
# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
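# Note: this is a random split; since sea-level measurements are time-ordered, a
# chronological hold-out (e.g. train_test_split(..., shuffle=False)) would arguably
# give a fairer out-of-sample evaluation for this kind of data.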
# Train an XGBoost model to obtain feature importances
model = xgb.XGBRegressor()
model.fit(X_train, y_train)
# Plot the feature importances
xgb.plot_importance(model)
# Select the most important features
selected_features = X.columns[model.feature_importances_ > 0.05]
# Print the selected features
print("Selected Features:")
print(selected_features)
# Train a new model using only the selected features
X_train_selected = X_train[selected_features]
X_test_selected = X_test[selected_features]
model_selected = xgb.XGBRegressor()
model_selected.fit(X_train_selected, y_train)
# Evaluate the model on the testing set
y_pred = model_selected.predict(X_test_selected)
mse = mean_squared_error(y_test, y_pred)
print("Mean squared error:", mse)
print("Selected Features:")
for feature in selected_features:
print(feature)
# Linear Regression model
lr = LinearRegression()
lr.fit(X_train, y_train)
y_pred_lr = lr.predict(X_test)
mse_lr = mean_squared_error(y_test, y_pred_lr)
r2_lr = r2_score(y_test, y_pred_lr)
# Print results
print("Linear Regression Results:")
print("MSE:", mse_lr)
print("R2:", r2_lr)
plt.scatter(y_test, y_pred_lr)
plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], "k--", lw=3)
plt.xlabel("True Values")
plt.ylabel("Predictions")
plt.title("Linear Regression - Predicted vs. True Values")
plt.show()
# Random Forest Regression model
rfr = RandomForestRegressor(n_estimators=100, random_state=0)
rfr.fit(X_train, y_train)
y_pred_rfr = rfr.predict(X_test)
mse_rfr = mean_squared_error(y_test, y_pred_rfr)
r2_rfr = r2_score(y_test, y_pred_rfr)
print("\nRandom Forest Regression Results:")
print("MSE:", mse_rfr)
print("R2:", r2_rfr)
# Visualize Random Forest Regression predictions
plt.scatter(y_test, y_pred_rfr, color="blue", label="Predicted")
plt.scatter(y_test, y_test, color="red", label="Actual")
plt.title("Random Forest Regression Model")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
# K-Nearest Neighbors Regression model
knr = KNeighborsRegressor(n_neighbors=5)
knr.fit(X_train, y_train)
y_pred_knr = knr.predict(X_test)
mse_knr = mean_squared_error(y_test, y_pred_knr)
r2_knr = r2_score(y_test, y_pred_knr)
print("\nK-Nearest Neighbors Regression Results:")
print("MSE:", mse_knr)
print("R2:", r2_knr)
# Visualize KNN Regression predictions
plt.scatter(y_test, y_pred_knr, color="blue", label="Predicted")
plt.scatter(y_test, y_test, color="red", label="Actual")
plt.title("KNN Regression Model")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
# Decision Tree Regression model
dtr = DecisionTreeRegressor(max_depth=10, random_state=0)
dtr.fit(X_train, y_train)
y_pred_dtr = dtr.predict(X_test)
mse_dtr = mean_squared_error(y_test, y_pred_dtr)
r2_dtr = r2_score(y_test, y_pred_dtr)
print("\nDecision Tree Regression Results:")
print("MSE:", mse_dtr)
print("R2:", r2_dtr)
# Visualize Decision Tree Regression predictions
plt.scatter(y_test, y_pred_dtr, color="blue", label="Predicted")
plt.scatter(y_test, y_test, color="red", label="Actual")
plt.title("Decision Tree Regression Model")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
from sklearn.linear_model import Lasso, Ridge
# Lasso Regression model
lasso = Lasso(alpha=0.1, max_iter=10000)
lasso.fit(X_train, y_train)
y_pred_lasso = lasso.predict(X_test)
mse_lasso = mean_squared_error(y_test, y_pred_lasso)
r2_lasso = r2_score(y_test, y_pred_lasso)
# Print results
print("Lasso Regression Results:")
print("MSE:", mse_lasso)
print("R2:", r2_lasso)
# Visualize Lasso Regression predictions
plt.scatter(y_test, y_pred_lasso, color="blue", label="Predicted")
plt.scatter(y_test, y_test, color="red", label="Actual")
plt.title("Lasso Regression Model")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
# Ridge Regression model
ridge = Ridge(alpha=0.1, max_iter=10000)
ridge.fit(X_train, y_train)
y_pred_ridge = ridge.predict(X_test)
mse_ridge = mean_squared_error(y_test, y_pred_ridge)
r2_ridge = r2_score(y_test, y_pred_ridge)
# Print results
print("\nRidge Regression Results:")
print("MSE:", mse_ridge)
print("R2:", r2_ridge)
# Visualize Ridge Regression predictions
plt.scatter(y_test, y_pred_ridge, color="blue", label="Predicted")
plt.scatter(y_test, y_test, color="red", label="Actual")
plt.title("Ridge Regression Model")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/973/129973810.ipynb
| null | null |
[{"Id": 129973810, "ScriptId": 38659635, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12460626, "CreationDate": "05/17/2023 20:52:57", "VersionNumber": 1.0, "Title": "Sea Level Prediction C.", "EvaluationDate": "05/17/2023", "IsChange": true, "TotalLines": 194.0, "LinesInsertedFromPrevious": 194.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import pandas as pd
import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# Import necessary libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.metrics import confusion_matrix, accuracy_score
sns.set_style("whitegrid")
plt.style.use("fivethirtyeight")
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from keras.models import Sequential
from keras.callbacks import EarlyStopping
from keras.layers import Dense, LSTM, GRU, Dropout
from sklearn.preprocessing import MinMaxScaler
# Load the sea level data into a Pandas DataFrame
sea_level_df = pd.read_csv("/kaggle/input/sea-level/sealevel.csv")
# Separate the features and the target variable
X = sea_level_df.drop(["GMSL_noGIA"], axis=1)
y = sea_level_df["GMSL_noGIA"]
# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# Train an XGBoost model to obtain feature importances
model = xgb.XGBRegressor()
model.fit(X_train, y_train)
# Plot the feature importances
xgb.plot_importance(model)
# Select the most important features
selected_features = X.columns[model.feature_importances_ > 0.05]
# Print the selected features
print("Selected Features:")
print(selected_features)
# Train a new model using only the selected features
X_train_selected = X_train[selected_features]
X_test_selected = X_test[selected_features]
model_selected = xgb.XGBRegressor()
model_selected.fit(X_train_selected, y_train)
# Evaluate the model on the testing set
y_pred = model_selected.predict(X_test_selected)
mse = mean_squared_error(y_test, y_pred)
print("Mean squared error:", mse)
print("Selected Features:")
for feature in selected_features:
print(feature)
# Linear Regression model
lr = LinearRegression()
lr.fit(X_train, y_train)
y_pred_lr = lr.predict(X_test)
mse_lr = mean_squared_error(y_test, y_pred_lr)
r2_lr = r2_score(y_test, y_pred_lr)
# Print results
print("Linear Regression Results:")
print("MSE:", mse_lr)
print("R2:", r2_lr)
plt.scatter(y_test, y_pred_lr)
plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], "k--", lw=3)
plt.xlabel("True Values")
plt.ylabel("Predictions")
plt.title("Linear Regression - Predicted vs. True Values")
plt.show()
# Random Forest Regression model
rfr = RandomForestRegressor(n_estimators=100, random_state=0)
rfr.fit(X_train, y_train)
y_pred_rfr = rfr.predict(X_test)
mse_rfr = mean_squared_error(y_test, y_pred_rfr)
r2_rfr = r2_score(y_test, y_pred_rfr)
print("\nRandom Forest Regression Results:")
print("MSE:", mse_rfr)
print("R2:", r2_rfr)
# Visualize Random Forest Regression predictions
plt.scatter(y_test, y_pred_rfr, color="blue", label="Predicted")
plt.scatter(y_test, y_test, color="red", label="Actual")
plt.title("Random Forest Regression Model")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
# K-Nearest Neighbors Regression model
knr = KNeighborsRegressor(n_neighbors=5)
knr.fit(X_train, y_train)
y_pred_knr = knr.predict(X_test)
mse_knr = mean_squared_error(y_test, y_pred_knr)
r2_knr = r2_score(y_test, y_pred_knr)
print("\nK-Nearest Neighbors Regression Results:")
print("MSE:", mse_knr)
print("R2:", r2_knr)
# Visualize KNN Regression predictions
plt.scatter(y_test, y_pred_knr, color="blue", label="Predicted")
plt.scatter(y_test, y_test, color="red", label="Actual")
plt.title("KNN Regression Model")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
# Decision Tree Regression model
dtr = DecisionTreeRegressor(max_depth=10, random_state=0)
dtr.fit(X_train, y_train)
y_pred_dtr = dtr.predict(X_test)
mse_dtr = mean_squared_error(y_test, y_pred_dtr)
r2_dtr = r2_score(y_test, y_pred_dtr)
print("\nDecision Tree Regression Results:")
print("MSE:", mse_dtr)
print("R2:", r2_dtr)
# Visualize Decision Tree Regression predictions
plt.scatter(y_test, y_pred_dtr, color="blue", label="Predicted")
plt.scatter(y_test, y_test, color="red", label="Actual")
plt.title("Decision Tree Regression Model")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
from sklearn.linear_model import Lasso, Ridge
# Lasso Regression model
lasso = Lasso(alpha=0.1, max_iter=10000)
lasso.fit(X_train, y_train)
y_pred_lasso = lasso.predict(X_test)
mse_lasso = mean_squared_error(y_test, y_pred_lasso)
r2_lasso = r2_score(y_test, y_pred_lasso)
# Print results
print("Lasso Regression Results:")
print("MSE:", mse_lasso)
print("R2:", r2_lasso)
# Visualize Lasso Regression predictions
plt.scatter(y_test, y_pred_lasso, color="blue", label="Predicted")
plt.scatter(y_test, y_test, color="red", label="Actual")
plt.title("Lasso Regression Model")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
# Ridge Regression model
ridge = Ridge(alpha=0.1, max_iter=10000)
ridge.fit(X_train, y_train)
y_pred_ridge = ridge.predict(X_test)
mse_ridge = mean_squared_error(y_test, y_pred_ridge)
r2_ridge = r2_score(y_test, y_pred_ridge)
# Print results
print("\nRidge Regression Results:")
print("MSE:", mse_ridge)
print("R2:", r2_ridge)
# Visualize Ridge Regression predictions
plt.scatter(y_test, y_pred_ridge, color="blue", label="Predicted")
plt.scatter(y_test, y_test, color="red", label="Actual")
plt.title("Ridge Regression Model")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
| false | 0 | 1,998 | 0 | 1,998 | 1,998 |
||
129014806
|
# ## Testing Repository:
# https://github.com/UNSW-CEEM/Solar-Curtailment
from solarcurtailment import curtailment_calculation
# MODIFY THE FILE_PATH ACCORDING TO YOUR DIRECTORY FOR SAVING THE DATA FILES.
file_path = r"/kaggle/input/solarunsw/Data" # for running in Samhan's laptop
# file_path = r"C:\Users\samha\Documents\CANVAS\data" #for running in TETB CEEM09 computer
# These samples represent (consecutively): tripping curtailment on a non-clear-sky day, tripping curtailment on a clear-sky
# day, VVAr curtailment, VWatt curtailment, an incomplete data sample, and a sample without curtailment.
for i in [1, 11, 14, 4, 5, 9]:
sample_number = i
print("Analyzing sample number {}".format(i))
data_file = "/data_sample_{}.csv".format(sample_number)
ghi_file = "/ghi_sample_{}.csv".format(sample_number)
curtailment_calculation.compute(file_path, data_file, ghi_file)
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/014/129014806.ipynb
| null | null |
[{"Id": 129014806, "ScriptId": 38346742, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1751233, "CreationDate": "05/10/2023 10:20:20", "VersionNumber": 1.0, "Title": "UNSW-CEEM-Solar-Curtailment", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 21.0, "LinesInsertedFromPrevious": 21.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
# ## Testing Repository:
# https://github.com/UNSW-CEEM/Solar-Curtailment
from solarcurtailment import curtailment_calculation
# MODIFY THE FILE_PATH ACCORDING TO YOUR DIRECTORY FOR SAVING THE DATA FILES.
file_path = r"/kaggle/input/solarunsw/Data" # for running in Samhan's laptop
# file_path = r"C:\Users\samha\Documents\CANVAS\data" #for running in TETB CEEM09 computer
# These samples represent (consecutively): tripping curtailment on a non-clear-sky day, tripping curtailment on a clear-sky
# day, VVAr curtailment, VWatt curtailment, an incomplete data sample, and a sample without curtailment.
for i in [1, 11, 14, 4, 5, 9]:
sample_number = i
print("Analyzing sample number {}".format(i))
data_file = "/data_sample_{}.csv".format(sample_number)
ghi_file = "/ghi_sample_{}.csv".format(sample_number)
curtailment_calculation.compute(file_path, data_file, ghi_file)
| false | 0 | 287 | 0 | 287 | 287 |
||
129014335
|
<jupyter_start><jupyter_text>Iris Species
The Iris dataset was used in R.A. Fisher's classic 1936 paper, [The Use of Multiple Measurements in Taxonomic Problems](http://rcs.chemometrics.ru/Tutorials/classification/Fisher.pdf), and can also be found on the [UCI Machine Learning Repository][1].
It includes three iris species with 50 samples each as well as some properties about each flower. One flower species is linearly separable from the other two, but the other two are not linearly separable from each other.
The columns in this dataset are:
- Id
- SepalLengthCm
- SepalWidthCm
- PetalLengthCm
- PetalWidthCm
- Species
[](https://www.kaggle.com/benhamner/d/uciml/iris/sepal-width-vs-length)
[1]: http://archive.ics.uci.edu/ml/
Kaggle dataset identifier: iris
<jupyter_script># DATASET INFORMATION :-
# The Iris Dataset contains four features (length and width of sepals and petals) of 50 samples of three species of Iris (Iris setosa,Iris versicolor and Iris virginica).
#
# 
# ATTRIBUTE INFORMATION :-
# 1.sepal length in cm
# 2.sepal width in cm
# 3.petal length in cm
# 4.petal width in cm
# 5.species - Iris Setosa, Iris Versicolor, Iris Virginica
#
# 
#
#
#
# The goal is to classify iris flowers among three species (setosa, versicolor, virginica) from the measurement of sepal length, sepal width, petal length, and petal width.
# The main goal is to design a model that will predict the output class for new flowers based on the input attributes.
# **IMPORT NECESSARY LIBRARIES**
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#
# **LOADING DATASET**
# to read the data
iris = pd.read_csv("/kaggle/input/iris/Iris.csv")
iris.head()
# to delete the Id column
df = iris.drop(["Id"], axis=1)
df.head()
# to display statistics about data
df.describe()
# to display basic info about datatype
df.info()
# to display no of samples of each class
df["Species"].value_counts()
# **EXPLORATORY DATA ANALYSIS**
# Exploratory data analysis (EDA) is used to analyze and investigate data sets and summarize their main characteristics, with the help of data visualization methods.
# visualise the whole dataset
sns.pairplot(df, hue="Species")
# **Conclusions from the pairplot:-**
# 1. Iris setosa has the largest sepal width but the smallest petal length and petal width.
# 2. Iris virginica has the largest petal length and petal width.
# 3. Iris setosa is clearly separable from the other two species.
#
#
# **PREPROCESSING THE DATASET**
# Data preprocessing is a process of preparing the raw data and making it suitable for a machine learning model.
# to check for null values
df.isnull().sum()
# **LABEL ENCODER**
# Label encoder is used to convert labels into numeric, machine-readable form. This will convert the species names Iris-setosa, Iris-versicolor and Iris-virginica to 0, 1 and 2 respectively.
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df["Species"] = le.fit_transform(df["Species"])
print(df.head())
print(df[50:55])
print(df.tail())
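# The fitted encoder keeps the learned class order, which is handy for decoding
# predictions later (a small optional check):
print(le.classes_)
print(dict(zip(le.classes_, le.transform(le.classes_))))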
# **SEPARATING THE INPUT COLUMNS AND OUTPUT COLUMNS**
# seperate input and output variables
# input variables = X = SepalLength, SepalWidth, PetalLength, PetalWidth
# output variables = y = Species
X = df.drop(columns=["Species"])
y = df["Species"]
# print(X.head())
# print(y.head())
# **SPLITTING THE DATA INTO TRAINING AND TESTING**
# split the data into train and test dataset
# train - 70
# test - 30
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
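# Note: no random_state is fixed here, so the split (and the accuracy scores below)
# will vary from run to run; passing random_state=<some int> and stratify=y to
# train_test_split makes the comparison between models reproducible.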
# **MODEL 1 -LOGISTIC REGRESSION**
# importing model
from sklearn.linear_model import LogisticRegression
model_LR = LogisticRegression()
# model training
model_LR.fit(X_train, y_train)
# calculate the accuracy
predictionLR = model_LR.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, predictionLR) * 100)
# **MODEL 2 -SUPPORT VECTOR MACHINE**
# import model
from sklearn.svm import SVC
model_SVC = SVC()
# model training
model_SVC.fit(X_train, y_train)
# calculate the accuracy
predictionSVC = model_SVC.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, predictionSVC) * 100)
# **MODEL 3 - DECISION TREE CLASSIFIER**
# import model
from sklearn.tree import DecisionTreeClassifier
model_DTC = DecisionTreeClassifier()
# model training
model_DTC.fit(X_train, y_train)
# calculate the accuracy
predictionDTC = model_DTC.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, predictionDTC) * 100)
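# A small optional summary line to compare the three classifiers on the same split:
print(
    "LogisticRegression: {:.1f}% | SVC: {:.1f}% | DecisionTree: {:.1f}%".format(
        accuracy_score(y_test, predictionLR) * 100,
        accuracy_score(y_test, predictionSVC) * 100,
        accuracy_score(y_test, predictionDTC) * 100,
    )
)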
# **NEW INPUT**
X_new = np.array(
[
[3.4, 2.2, 1.5, 0.5],
[
4.8,
2.3,
3.7,
1.3,
],
[5.1, 2.6, 4.9, 2],
]
)
prediction_new = model_SVC.predict(X_new)
print("Prediction of new species : {}".format(prediction_new))
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/014/129014335.ipynb
|
iris
| null |
[{"Id": 129014335, "ScriptId": 38051781, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14682564, "CreationDate": "05/10/2023 10:15:42", "VersionNumber": 5.0, "Title": "IRIS dataset classification", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 146.0, "LinesInsertedFromPrevious": 59.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 87.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184696176, "KernelVersionId": 129014335, "SourceDatasetVersionId": 420}]
|
[{"Id": 420, "DatasetId": 19, "DatasourceVersionId": 420, "CreatorUserId": 1, "LicenseName": "CC0: Public Domain", "CreationDate": "09/27/2016 07:38:05", "VersionNumber": 2.0, "Title": "Iris Species", "Slug": "iris", "Subtitle": "Classify iris plants into three species in this classic dataset", "Description": "The Iris dataset was used in R.A. Fisher's classic 1936 paper, [The Use of Multiple Measurements in Taxonomic Problems](http://rcs.chemometrics.ru/Tutorials/classification/Fisher.pdf), and can also be found on the [UCI Machine Learning Repository][1].\n\nIt includes three iris species with 50 samples each as well as some properties about each flower. One flower species is linearly separable from the other two, but the other two are not linearly separable from each other.\n\nThe columns in this dataset are:\n\n - Id\n - SepalLengthCm\n - SepalWidthCm\n - PetalLengthCm\n - PetalWidthCm\n - Species\n\n[](https://www.kaggle.com/benhamner/d/uciml/iris/sepal-width-vs-length)\n\n\n [1]: http://archive.ics.uci.edu/ml/", "VersionNotes": "Republishing files so they're formally in our system", "TotalCompressedBytes": 15347.0, "TotalUncompressedBytes": 15347.0}]
|
[{"Id": 19, "CreatorUserId": 1, "OwnerUserId": NaN, "OwnerOrganizationId": 7.0, "CurrentDatasetVersionId": 420.0, "CurrentDatasourceVersionId": 420.0, "ForumId": 997, "Type": 2, "CreationDate": "01/12/2016 00:33:31", "LastActivityDate": "02/06/2018", "TotalViews": 1637863, "TotalDownloads": 423540, "TotalVotes": 3416, "TotalKernels": 6420}]
| null |
# DATASET INFORMATION :-
# The Iris Dataset contains four features (length and width of sepals and petals) of 50 samples of three species of Iris (Iris setosa,Iris versicolor and Iris virginica).
#
# 
# ATTRIBUTE INFORMATION :-
# 1.sepal length in cm
# 2.sepal width in cm
# 3.petal length in cm
# 4.petal width in cm
# 5.species - Iris Setosa, Iris Versicolor, Iris Virginica
#
# 
#
#
#
# The goal is to classify iris flowers among three species (setosa, versicolor, virginica) from the measurement of sepal length, sepal width, petal length and petal width.
# The main goal is to design a model that will predict the output class for new flowers based on the input attributes.
# **IMPORT NECESSARY LIBRARIES**
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#
# **LOADING DATASET**
# to read the data
iris = pd.read_csv("/kaggle/input/iris/Iris.csv")
iris.head()
# to delete the id column
df = iris.drop(["Id"], axis=1)
df.head()
# to display statistics about data
df.describe()
# to display basic info about datatype
df.info()
# to display the number of samples of each class
df["Species"].value_counts()
# **EXPLORATORY DATA ANALYSIS**
# Exploratory data analysis (EDA) is used for analyzing and investigating the data sets and summarizing their main characteristics, with the help of data visualization methods.
# visualise the whole dataset
sns.pairplot(df, hue="Species")
# **Conclusions from graph:-**
# 1.Iris setosa has the smallest petal length and petal width.
# 2.Iris virginica has the largest petal length and petal width.
# 3.Iris setosa is clearly separable from the other two species.
#
#
# **PREPROCESSING THE DATASET**
# Data preprocessing is a process of preparing the raw data and making it suitable for a machine learning model.
# to check for null values
df.isnull().sum()
# **LABEL ENCODER**
# Label encoder is used for converting labels into numeric form so as to make them machine readable. This will convert the species names Iris-setosa, Iris-versicolor and Iris-virginica to 0, 1 and 2 respectively.
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df["Species"] = le.fit_transform(df["Species"])
print(df.head())
print(df[50:55])
print(df.tail())
# **SEPARATING THE INPUT COLUMNS AND OUTPUT COLUMNS**
# separate input and output variables
# input variables = X = SepalLength, SepalWidth, PetalLength, PetalWidth
# output variables = y = Species
X = df.drop(columns=["Species"])
y = df["Species"]
# print(X.head())
# print(y.head())
# **SPLITTING THE DATA INTO TRAINING AND TESTING**
# split the data into train and test dataset
# train - 70
# test - 30
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
# **MODEL 1 -LOGISTIC REGRESSION**
# importing model
from sklearn.linear_model import LogisticRegression
model_LR = LogisticRegression()
# model training
model_LR.fit(X_train, y_train)
# calculate the accuracy
predictionLR = model_LR.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, predictionLR) * 100)
# **MODEL 2 -SUPPORT VECTOR MACHINE**
# import model
from sklearn.svm import SVC
model_SVC = SVC()
# model training
model_SVC.fit(X_train, y_train)
# calculate the accuracy
predictionSVC = model_SVC.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, predictionSVC) * 100)
# **MODEL 3 - DECISION TREE CLASSIFIER**
# import model
from sklearn.tree import DecisionTreeClassifier
model_DTC = DecisionTreeClassifier()
# model training
model_DTC.fit(X_train, y_train)
# calculate the accuracy
predictionDTC = model_DTC.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, predictionDTC) * 100)
# **NEW INPUT**
X_new = np.array(
[
[3.4, 2.2, 1.5, 0.5],
[
4.8,
2.3,
3.7,
1.3,
],
[5.1, 2.6, 4.9, 2],
]
)
prediction_new = model_SVC.predict(X_new)
print("Prediction of new species : {}".format(prediction_new))
| false | 0 | 1,407 | 0 | 1,705 | 1,407 |
||
129014537
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
from IPython.display import clear_output
#
clear_output()
import numpy as np
import pandas as pd
import seaborn as sns
import plotly.express as px
import matplotlib.pyplot as plt
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold, train_test_split
# importing what was used for encoding labels (LabelEncoder), what was used for measuring accuracy (accuracy_score), k-fold cross-validation (StratifiedKFold) and the train/test split function (details a bit hazy...)
from lightgbm import LGBMClassifier  # LGBMClassifier is currently erroring across the board.
import lazypredict
# Lazy Predict - shows which models give good results when basic, untuned models are used. Since there is no tuning, don't trust it blindly.
from lazypredict.Supervised import LazyClassifier  # this one errors as well.
import time
import warnings
warnings.filterwarnings("ignore")
train = pd.read_csv("/kaggle/input/spaceship-titanic/sample_submission.csv")
test = pd.read_csv("/kaggle/input/spaceship-titanic/train.csv")
submission = pd.read_csv("/kaggle/input/spaceship-titanic/test.csv")
RANDOM_STATE = 12
FOLDS = 5
STRATEGY = "median"
train.head()
print(f"\033[94mNumber of rows in train data: {train.shape[0]}")
print(f"\033[94mNumber of columns in train data: {train.shape[1]}")
print(f"\033[94mNumber of values in train data: {train.count().sum()}")
print(f"\033[94mNumber missing values in train data: {sum(train.isna().sum())}")
# printed with formatted strings; just printing shape would probably be enough.
print(f"\033[94m")
print(train.isna().sum().sort_values(ascending=False))
# check for missing values
train.describe()
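# A hedged sketch (added, not in the original notebook) of how the LazyClassifier imported
# above could be used here, assuming the lazypredict import actually works in this
# environment (the comments note it currently errors) and that the train file follows the
# standard Spaceship Titanic schema with numeric spend/age columns and a "Transported" target.
# Only numeric columns are kept and rows with missing values are dropped just to keep the
# example short; proper imputation and encoding would be better.
numeric_cols = train.select_dtypes(include="number").columns.tolist()
demo = train[numeric_cols + ["Transported"]].dropna()
X = demo[numeric_cols]
y = demo["Transported"].astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=RANDOM_STATE, stratify=y
)
clf = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None)
models, predictions = clf.fit(X_tr, X_te, y_tr, y_te)
print(models.head(10))  # leaderboard of untuned baseline models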
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/014/129014537.ipynb
| null | null |
[{"Id": 129014537, "ScriptId": 38351054, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5365843, "CreationDate": "05/10/2023 10:17:38", "VersionNumber": 1.0, "Title": "spaceship titanic", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 112.0, "LinesInsertedFromPrevious": 112.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
| null | null | null | null |
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
from IPython.display import clear_output
#
clear_output()
import numpy as np
import pandas as pd
import seaborn as sns
import plotly.express as px
import matplotlib.pyplot as plt
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold, train_test_split
# importing what was used for encoding labels (LabelEncoder), what was used for measuring accuracy (accuracy_score), k-fold cross-validation (StratifiedKFold) and the train/test split function (details a bit hazy...)
from lightgbm import LGBMClassifier  # LGBMClassifier is currently erroring across the board.
import lazypredict
# Lazy Predict - shows which models give good results when basic, untuned models are used. Since there is no tuning, don't trust it blindly.
from lazypredict.Supervised import LazyClassifier  # this one errors as well.
import time
import warnings
warnings.filterwarnings("ignore")
train = pd.read_csv("/kaggle/input/spaceship-titanic/sample_submission.csv")
test = pd.read_csv("/kaggle/input/spaceship-titanic/train.csv")
submission = pd.read_csv("/kaggle/input/spaceship-titanic/test.csv")
RANDOM_STATE = 12
FOLDS = 5
STRATEGY = "median"
train.head()
print(f"\033[94mNumber of rows in train data: {train.shape[0]}")
print(f"\033[94mNumber of columns in train data: {train.shape[1]}")
print(f"\033[94mNumber of values in train data: {train.count().sum()}")
print(f"\033[94mNumber missing values in train data: {sum(train.isna().sum())}")
# printed with formatted strings; just printing shape would probably be enough.
print(f"\033[94m")
print(train.isna().sum().sort_values(ascending=False))
# check for missing values
train.describe()
| false | 0 | 794 | 0 | 794 | 794 |
||
129014767
|
# # **[www.ybifoundation.org](https://www.ybifoundation.org/)**
# # **Create Pandas Series**
# import library
import pandas as pd
import numpy as np
# create series from list
pd.Series([1, 2, 3, 4, 5])
# create series from list
list = [1, 2, 3, 4, 5]
series = pd.Series(list)
series
# create series from array
array = np.array([10, 20, 30, 40, 50])
series = pd.Series(array)
series
# create series from dictionary
dict = {"a": 10, "b": 20, "d": 30}
series = pd.Series(dict)
series
# create series from range function
series = pd.Series(range(15))
series
# # **Create Pandas DataFrame**
# create dataframe from list
pd.DataFrame([1, 2, 3, 4, 5])
# create dataframe from list with row and column index
pd.DataFrame([1, 2, 3, 4, 5], index=["a", "b", "c", "d", "e"], columns=["Number"])
# create dataframe with multiple columns
list = [["a", 25], ["b", 30], ["c", 26], ["d", 22]]
pd.DataFrame(list, columns=["Alphabet", "Number"])
# create dataframe from dictionary
dict = {"Alphabet": ["a", "b", "c", "d", "e"], "Number": [1, 2, 3, 4, 5]}
pd.DataFrame(dict)
# create dataframe from array
array = np.array([1, 2, 3, 4, 5])
pd.DataFrame(array, columns=["Number"])
# create dataframe from array
array = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25]])
pd.DataFrame(array, columns=["Number", "Square"])
# create dataframe from range function
pd.DataFrame(range(5))
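# an added example (not in the original notebook): create dataframe from a list of dictionaries
# keys become columns, and any key missing from a record becomes NaN in that row
records = [{"Alphabet": "a", "Number": 1}, {"Alphabet": "b", "Number": 2}, {"Alphabet": "c"}]
pd.DataFrame(records)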
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/014/129014767.ipynb
| null | null |
[{"Id": 129014767, "ScriptId": 38351165, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8791705, "CreationDate": "05/10/2023 10:20:04", "VersionNumber": 1.0, "Title": "Create Series & Pandas DataFrame", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 70.0, "LinesInsertedFromPrevious": 70.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 664}]
| null | null | null | null |
# # **[www.ybifoundation.org](https://www.ybifoundation.org/)**
# # **Create Pandas Series**
# import library
import pandas as pd
import numpy as np
# create series from list
pd.Series([1, 2, 3, 4, 5])
# create series from list
list = [1, 2, 3, 4, 5]
series = pd.Series(list)
series
# create series from array
array = np.array([10, 20, 30, 40, 50])
series = pd.Series(array)
series
# create series from dictionary
dict = {"a": 10, "b": 20, "d": 30}
series = pd.Series(dict)
series
# create series from range function
series = pd.Series(range(15))
series
# # **Create Pandas DataFrame**
# create dataframe from list
pd.DataFrame([1, 2, 3, 4, 5])
# create dataframe from list with row and column index
pd.DataFrame([1, 2, 3, 4, 5], index=["a", "b", "c", "d", "e"], columns=["Number"])
# create dataframe with multiple columns
list = [["a", 25], ["b", 30], ["c", 26], ["d", 22]]
pd.DataFrame(list, columns=["Alphabet", "Number"])
# create dataframe from dictionary
dict = {"Alphabet": ["a", "b", "c", "d", "e"], "Number": [1, 2, 3, 4, 5]}
pd.DataFrame(dict)
# create dataframe from array
array = np.array([1, 2, 3, 4, 5])
pd.DataFrame(array, columns=["Number"])
# create dataframe from array
array = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25]])
pd.DataFrame(array, columns=["Number", "Square"])
# create dataframe from range function
pd.DataFrame(range(5))
| false | 0 | 514 | 664 | 514 | 514 |
||
129063346
|
<jupyter_start><jupyter_text>Commodity Prices Dataset
This dataset contains monthly historical prices of 10 different commodities from January 1980 to April 2023. The data was collected from the Alpha Vantage API using Python. The commodities included in the dataset are **WTI crude oil, cotton, natural gas, coffee, sugar, aluminum, Brent crude oil, corn, copper, and wheat**. Prices are reported in USD per unit of measurement for each commodity. The dataset contains **520 rows and 12 columns**, with each row representing a monthly observation of the prices of the 10 commodities. The 'All_Commodities' column is new.
**WTI**: WTI crude oil price per unit of measurement (USD).
**COTTON**: Cotton price per unit of measurement (USD).
**NATURAL_GAS**: Natural gas price per unit of measurement (USD).
**ALL_COMMODITIES**: A composite index that represents the average price of all 10 commodities in the dataset, weighted by their individual market capitalizations. Prices are reported in USD per unit of measurement.
**COFFEE**: Coffee price per unit of measurement (USD).
**SUGAR**: Sugar price per unit of measurement (USD).
**ALUMINUM**: Aluminum price per unit of measurement (USD).
**BRENT**: Brent crude oil price per unit of measurement (USD).
**CORN**: Corn price per unit of measurement (USD).
**COPPER**: Copper price per unit of measurement (USD).
**WHEAT**: Wheat price per unit of measurement (USD).
Note that some values are missing in the dataset, represented by NaN. These missing values occur for some of the commodities in the earlier years of the dataset.
It may be useful for time series analysis and predictive modeling.
NaN values were included so that you as a Data Scientist can get some practice on dealing with NaN values.
https://www.alphavantage.co/documentation/
Kaggle dataset identifier: commodity-prices-dataset
<jupyter_script># # Making the 'Commodity Dataset w/AlphaVantage API
# This Python code imports the `load_dotenv()` function from the `dotenv` package, which is used to load environment variables from a `.env` file. It also imports the `os` and `requests` packages.
# The code loads environment variables from the `.env` file using the `load_dotenv()` function. It then retrieves the value of the `ALPHA_VANTAGE_API_KEY` variable from the environment variables using the `os.environ.get()` function.
# Next, the code makes an API request to the Alpha Vantage API using the retrieved API key and the `requests.get()` function. The response data is then parsed as a JSON object using the `.json()` method.
# Finally, the code prints the parsed JSON data to the console using the `print()` function.
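# A minimal sketch of the environment-variable pattern described above, assuming the
# python-dotenv package is installed and a local .env file defines ALPHA_VANTAGE_API_KEY
# (the notebook code below only reads os.environ directly, so this cell is illustrative):
from dotenv import load_dotenv
import os
import requests

load_dotenv()  # read variables from .env into the process environment
api_key = os.environ.get("ALPHA_VANTAGE_API_KEY", "demo")
response = requests.get(
    "https://www.alphavantage.co/query",
    params={"function": "WTI", "interval": "monthly", "apikey": api_key},
)
print(response.json())  # parsed JSON payload returned by the Alpha Vantage API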
# # This code defines a `Commodity` class
# This code defines a `Commodity` class that provides methods to retrieve and transform commodity market data, and to save the data to a CSV file. The class constructor takes several optional parameters, including `outputsize`, `datatype`, and `api_key`. The `outputsize` parameter determines whether the API will return daily or monthly historical data for a commodity. The `datatype` parameter specifies whether the API will return data in JSON or CSV format. If `api_key` is not provided, the constructor will use the 'demo' key provided by the API.
# The `_get_data` method sends a GET request to the Alpha Vantage API to retrieve commodity market data. The method takes two optional parameters, `function` and `interval`. `function` specifies the type of data to be retrieved, such as intraday prices or historical prices. `interval` specifies the time interval for the data, such as daily or weekly. The `_get_data` method returns a JSON object containing the requested data.
# The `_transform_data_to_df` method transforms the data returned by `_get_data` into a Pandas dataframe. The method takes the JSON object returned by `_get_data` as an input. The method converts the date column to a `datetime` format, the value column to numeric format, and sets the date column as the dataframe index.
# The `to_csv` method saves the commodity market data to a CSV file. The method takes two parameters, `data` and `file_path`. If `data` is already a Pandas dataframe, the method saves it to a CSV file specified by `file_path`. Otherwise, the method uses the `_transform_data_to_df` method to transform the data to a Pandas dataframe and then saves it to a CSV file specified by `file_path`. If an error occurs while saving the data, an error message is printed to the console.
import pandas as pd
import numpy as np
import os
import requests
api_key = os.environ.get("ALPHA_VANTAGE_API_KEY")
class Commodity:
API_BASE_URL = "https://www.alphavantage.co/query"
"""
__init__(self, outputsize='compact', datatype='json', api_key=None) -
Initializes a Commodity object with default values for outputsize,
datatype, and api_key unless otherwise specified.
"""
def __init__(self, outputsize="compact", datatype="json", api_key=None):
if api_key is None:
api_key = "demo"
self.outputsize = outputsize
self.datatype = datatype
self.api_key = api_key
def _get_data(self, function, interval="daily", datatype="json"):
params = {
"function": function,
"interval": interval,
"datatype": datatype,
"apikey": self.api_key,
}
try:
response = requests.get(self.API_BASE_URL, params=params)
response.raise_for_status()
except requests.exceptions.RequestException as e:
print(f"Error: {e}")
return None
return response.json()
def _transform_data_to_df(self, data):
try:
df = pd.DataFrame(data["data"], columns=["date", "value"])
df["date"] = pd.to_datetime(df["date"])
df["value"] = df["value"].replace(".", np.nan)
df["value"] = pd.to_numeric(df["value"])
df.set_index("date", inplace=True)
return df
except (KeyError, TypeError) as e:
print(f"Error occurred while transforming data to dataframe: {e}")
return None
def to_csv(self, data, file_path):
try:
if isinstance(data, pd.DataFrame):
data.to_csv(file_path)
else:
df = self._transform_data_to_df(data)
df.to_csv(file_path)
print(f"Data saved to {file_path}")
except Exception as e:
print(f"Error saving data to {file_path}: {e}")
commodity = Commodity(api_key="your_api_key_here")
data = commodity._get_data("REAL_GDP", interval="monthly", datatype="json")
print(data)
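# A small usage sketch (added): the to_csv helper defined on the class can persist the raw
# JSON payload as a CSV file; the file name here is only an example.
if data is not None:
    commodity.to_csv(data, "real_gdp_monthly.csv")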
# # This code makes use of the Commodity class and the Alpha Vantage API to retrieve data
# This code makes use of the Commodity class and the Alpha Vantage API to retrieve data for various commodities. The `COMMODITY_DICT` variable stores a set of commodity names, and a loop iterates through each name in the set.
# For each commodity, the `_get_data()` method of the `Commodity` class is called with the commodity name, an interval of 'monthly', and a datatype of 'json'. The resulting data is then printed to the console.
# After retrieving the data, it is saved to a JSON file with the same name as the commodity, using the `json.dump()` method.
# The variable `calls_count` is used to keep track of the number of API calls made. If `calls_count` reaches 5, the program sleeps for 5 minutes using the `time.sleep()` function before continuing. This is to avoid hitting the Alpha Vantage API rate limit.
import time
import json
commodity = Commodity(api_key="your_api_key_here")
COMMODITY_DICT = {
"WTI",
"BRENT",
"NATURAL_GAS",
"COPPER",
"ALUMINUM",
"WHEAT",
"CORN",
"COTTON",
"SUGAR",
"COFFEE",
"ALL_COMMODITIES",
}
calls_count = 0
for commodity_name in COMMODITY_DICT:
data = commodity._get_data(commodity_name, interval="monthly", datatype="json")
print(data)
# Save the data to a JSON file
with open(f"{commodity_name}.json", "w") as f:
json.dump(data, f)
calls_count += 1
if calls_count == 5:
time.sleep(300) # sleep for 5 minutes
calls_count = 0
# # This code reads in JSON files containing commodity data
# This code reads in JSON files containing commodity data, converts them to pandas data frames, and creates a dictionary of data frames for each commodity.
# First, the code defines a `folder_path` variable that specifies the directory where the JSON files are located. It also creates a dictionary `commodity_files` that maps commodity names to their corresponding file names.
# Next, the code initializes an empty dictionary `dfs` to store the data frames for each commodity. Then, it loops over each commodity in the `commodity_files` dictionary, reads in the data from the corresponding JSON file using the `json.load` function, and converts it to a pandas data frame. The data frame is then cleaned by converting the `date` column to a datetime format and converting the `value` column to numeric format. Finally, the cleaned data frame is added to the `dfs` dictionary using the commodity name as the key.
# After the loop completes, the `dfs` dictionary contains a data frame for each commodity, and you can access individual data frames using their commodity names as keys. The code prints the data frame for the 'WTI' commodity as an example.
import os
import pandas as pd
# Path to the folder where the JSON files are saved
folder_path = "/Users/richeyjay/Desktop/AlphaVantage/env/Code/JSON"
# Dictionary mapping commodity names to file names
commodity_files = {
"WTI": "WTI.json",
"BRENT": "BRENT.json",
"NATURAL_GAS": "NATURAL_GAS.json",
"COPPER": "COPPER.json",
"ALUMINUM": "ALUMINUM.json",
"WHEAT": "WHEAT.json",
"CORN": "CORN.json",
"COTTON": "COTTON.json",
"SUGAR": "SUGAR.json",
"COFFEE": "COFFEE.json",
"ALL_COMMODITIES": "ALL_COMMODITIES.json",
}
# Loop over the commodities and read in the data from the files
dfs = {}
for commodity, file_name in commodity_files.items():
file_path = os.path.join(folder_path, file_name)
with open(file_path, "r") as f:
data = json.load(f)
df = pd.DataFrame(data["data"], columns=["date", "value"])
df["date"] = pd.to_datetime(df["date"])
df["value"] = df["value"].replace(".", np.nan)
df["value"] = pd.to_numeric(df["value"])
df.set_index("date", inplace=True)
dfs[commodity] = df
# Now you have a dictionary of data frames, one for each commodity
print(dfs["WTI"])
dfs
# # This script loads all JSON files in a specified folder and creates a merged Pandas data frame from them. Each JSON file corresponds to a different commodity, so the script creates a separate column in the data frame for each commodity's price data. Here's a breakdown of what the code does:
# 1. Set the folder path where the JSON files are stored.
# 2. Create an empty list to store the data frames.
# 3. Loop through all files in the folder that end with ".json".
# 4. Load the JSON data from each file into a Pandas data frame.
# 5. Rename the "value" column to the name of the commodity (which is the file name without the ".json" extension).
# 6. Append each data frame to the list.
# 7. Merge all the data frames in the list into one data frame, using the "date" column as the join key.
# 8. Reset the index of the merged data frame and rename the columns.
# 9. Print the merged data frame.
# Overall, this script provides a useful way to merge multiple commodity price data sets into a single data frame for further analysis.
import os
import pandas as pd
# Set folder path
folder_path = "/Users/richeyjay/Desktop/AlphaVantage/env/Code/JSON"
# Create an empty list to store the data frames
df_list = []
# Loop through all the files in the folder
for file_name in os.listdir(folder_path):
if file_name.endswith(".json"):
# Get the full file path
file_path = os.path.join(folder_path, file_name)
# Load the JSON data from the file
with open(file_path, "r") as f:
data = json.load(f)
# Create a data frame from the JSON data
df = pd.DataFrame(data["data"], columns=["date", "value"])
df["date"] = pd.to_datetime(df["date"])
df["value"] = pd.to_numeric(df["value"], errors="coerce")
df.set_index("date", inplace=True)
# Rename the value column to the commodity name
commodity = file_name[:-5]
df.rename(columns={"value": commodity}, inplace=True)
# Add the data frame to the list
df_list.append(df)
# Merge all the data frames into one
merged_df = pd.merge(df_list[0], df_list[1], on="date", how="outer")
for df in df_list[2:]:
merged_df = pd.merge(merged_df, df, on="date", how="outer")
# Reset the index and rename the columns
merged_df = merged_df.reset_index()
merged_df = merged_df.rename(columns={"index": "date"})
# Print the merged data frame
print(merged_df)
merged_df
merged_df.columns
# # Save data to a csv file
# save dataframe to csv file
merged_df.to_csv("COMMODITY_DATA.csv", index=False)
# # Create Train test split csv files
from sklearn.model_selection import train_test_split
# Assuming your dataset is stored in a pandas dataframe called `merged_df`
# Split the data into training and testing sets
train_df, test_df = train_test_split(merged_df, test_size=0.2, random_state=42)
# Write the training set to a CSV file
train_df.to_csv("train.csv", index=False)
# Write the testing set to a CSV file
test_df.to_csv("test.csv", index=False)
# # Import packages so that we can explore data
# load packages
import sys # access to system parameters https://docs.python.org/3/library/sys.html
print("Python version: {}".format(sys.version))
import pandas as pd # collection of functions for data processing and analysis modeled after R dataframes with SQL like features
print("pandas version: {}".format(pd.__version__))
import matplotlib # collection of functions for scientific and publication-ready visualization
print("matplotlib version: {}".format(matplotlib.__version__))
import numpy as np # foundational package for scientific computing
print("NumPy version: {}".format(np.__version__))
import scipy as sp  # collection of functions for scientific computing and advanced mathematics
print("SciPy version: {}".format(sp.__version__))
import IPython
from IPython import display # pretty printing of dataframes in Jupyter notebook
from IPython.display import HTML, display
print("IPython version: {}".format(IPython.__version__))
import sklearn # collection of machine learning algorithms
print("scikit-learn version: {}".format(sklearn.__version__))
# misc libraries
import random
import time
# ignore warnings
import warnings
warnings.filterwarnings("ignore")
print("-" * 25)
# # Common ML Algorithms
# Common Model Algorithms
from sklearn import (
svm,
tree,
linear_model,
neighbors,
naive_bayes,
ensemble,
discriminant_analysis,
gaussian_process,
)
# Common Model Helpers
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn import feature_selection
from sklearn import model_selection
from sklearn import metrics
# Visualization
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import seaborn as sns
from pandas.plotting import scatter_matrix
# Configure Visualization Defaults
# %matplotlib inline = show plots in Jupyter Notebook browser
mpl.style.use("ggplot")
sns.set_style("white")
sns.set_style("darkgrid")
pylab.rcParams["figure.figsize"] = 12, 8
print("Seaborn version: {}".format(sns.__version__))
# # Import data into variables
# PATH = "/kaggle/input/playground-series-s3e14"
TRAIN_FILENAME = "/Users/richeyjay/Desktop/AlphaVantage/env/Code/train.csv"
TEST_FILENAME = "/Users/richeyjay/Desktop/AlphaVantage/env/Code/test.csv"
train_data = pd.read_csv(TRAIN_FILENAME)
print(train_data.shape)
print("-" * 50)
test_data = pd.read_csv(TEST_FILENAME)
print(test_data.shape)
print("-" * 50)
# # Implementation of a Data Explorer Class so it can be easier to explore our data
class DataExplorer:
def __init__(self, data):
self.data = data
def print_info(self):
try:
print(self.data.info())
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_null_values(self):
try:
if self.data.isnull().values.any():
print("Null values found.")
null_cols = self.data.columns[self.data.isnull().any()]
for col in null_cols:
print(
f"{col} column has {self.data[col].isnull().sum()} null values."
)
else:
print("No null values found.")
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_duplicate_rows(self):
try:
if self.data.duplicated().sum() > 0:
print(f"{self.data.duplicated().sum()} duplicate rows found.")
else:
print("No duplicate rows found.")
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_data_shape(self):
try:
print(f"Number of rows: {self.data.shape[0]}")
print(f"Number of columns: {self.data.shape[1]}")
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def describe_numeric_features(self):
try:
numeric_cols = self.data.select_dtypes(
include=["float64", "int64"]
).columns.tolist()
if not numeric_cols:
print("No numeric columns found.")
else:
print(self.data[numeric_cols].describe())
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def describe_categorical_features(self):
try:
categorical_cols = self.data.select_dtypes(
include=["object"]
).columns.tolist()
if not categorical_cols:
print("No categorical columns found.")
else:
print(self.data[categorical_cols].describe())
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_duplicates(self):
try:
print(f"Number of duplicate rows: {self.data.duplicated().sum()}")
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_columns(self):
try:
print(self.data.columns)
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_data_types(self):
try:
print(self.data.dtypes)
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def drop_duplicates(self):
try:
self.data.drop_duplicates(inplace=True)
if self.data.duplicated().sum() == 0:
print("No duplicates found.")
else:
print("Duplicates dropped.")
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def drop_column(self, column):
try:
self.data.drop(column, axis=1, inplace=True)
print(f"Column {column} dropped")
print("-" * 50)
except Exception as e:
print(f"Error: {e}")
def fill_null(self, column, value):
try:
self.data[column].fillna(value, inplace=True)
print(f"Null values in {column} filled with {value}")
print("-" * 50)
except Exception as e:
print(f"Error: {e}")
def clean_data(self):
try:
self.check_null_values()
self.drop_duplicates()
if "id" in self.data.columns:
self.drop_column("id")
print("Data cleaned")
print("-" * 50)
except Exception as e:
print(f"Error: {e}")
def get_min(self):
try:
for col in self.data.columns:
print(f"Minimum value in {col}: {self.data[col].min()}")
except TypeError:
print("Columns must contain numeric values.")
except:
print("An error occurred while getting the minimum value.")
finally:
print("-" * 50)
def get_max(self):
numeric_cols = self.data.select_dtypes(
include=["float64", "int64"]
).columns.tolist()
if not numeric_cols:
print("No numeric columns found.")
else:
for col in numeric_cols:
try:
print(f"Maximum value in {col}: {self.data[col].max()}")
except TypeError:
print(f"Column {col} must contain numeric values.")
except:
print(
f"An error occurred while getting the maximum value for column {col}."
)
finally:
print("-" * 50)
train_data_explorer = DataExplorer(train_data)
train_data_explorer.check_null_values()
train_data_explorer.check_columns()
train_data_explorer.drop_column("date")
train_data_explorer.describe_numeric_features()
train_data_explorer.print_info()
train_data
# # Feature Correlation
# correlation heatmap of dataset
def correlation_heatmap(df):
_, ax = plt.subplots(figsize=(14, 12), dpi=300)
colormap = sns.diverging_palette(220, 10, as_cmap=True)
_ = sns.heatmap(
df.corr(),
cmap="icefire",
square=True,
cbar_kws={"shrink": 0.5},
ax=ax,
annot=True,
linewidths=0.1,
vmax=1.0,
linecolor="black",
annot_kws={"fontsize": 12},
)
plt.title("Pearson Correlation of Features for Training Dataset", y=1.05, size=15)
correlation_heatmap(train_data)
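# A hedged follow-up sketch (added, not in the original notebook): the dataset description
# notes that earlier years contain NaN values. DataFrame.corr() above already ignores them
# pairwise, but most models will not, so one simple option is to impute each numeric column
# with its median via the DataExplorer helper. Median imputation is just one reasonable
# default for skewed price series, not the only choice.
for col in train_data.select_dtypes(include="number").columns:
    if train_data[col].isnull().any():
        train_data_explorer.fill_null(col, train_data[col].median())
train_data_explorer.check_null_values()  # should now report no null values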
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/063/129063346.ipynb
|
commodity-prices-dataset
|
richeyjay
|
[{"Id": 129063346, "ScriptId": 38284641, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11109722, "CreationDate": "05/10/2023 17:15:48", "VersionNumber": 3.0, "Title": "Commodities", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 541.0, "LinesInsertedFromPrevious": 278.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 263.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184788309, "KernelVersionId": 129063346, "SourceDatasetVersionId": 5657366}]
|
[{"Id": 5657366, "DatasetId": 3240761, "DatasourceVersionId": 5732767, "CreatorUserId": 11109722, "LicenseName": "Other (specified in description)", "CreationDate": "05/10/2023 16:48:41", "VersionNumber": 3.0, "Title": "Commodity Prices Dataset", "Slug": "commodity-prices-dataset", "Subtitle": "The Power of Commodity Prices: A Comprehensive Dataset for Commodities", "Description": "This dataset contains monthly historical prices of 10 different commodities from January 1980 to April 2023. The data was collected from the Alpha Vantage API using Python. The commodities included in the dataset are **WTI crude oil, cotton, natural gas, coffee, sugar, aluminum, Brent crude oil, corn, copper, and wheat**. Prices are reported in USD per unit of measurement for each commodity. The dataset contains **520 rows and 12 columns**, with each row representing a monthly observation of the prices of the 10 commodities. The 'All_Commodities' column is new.\n\n**WTI**: WTI crude oil price per unit of measurement (USD).\n**COTTON**: Cotton price per unit of measurement (USD).\n**NATURAL_GAS**: Natural gas price per unit of measurement (USD).\n**ALL_COMMODITIES**: A composite index that represents the average price of all 10 commodities in the dataset, weighted by their individual market capitalizations. Prices are reported in USD per unit of measurement.\n**COFFEE**: Coffee price per unit of measurement (USD).\n**SUGAR**: Sugar price per unit of measurement (USD).\n**ALUMINUM**: Aluminum price per unit of measurement (USD).\n**BRENT**: Brent crude oil price per unit of measurement (USD).\n**CORN**: Corn price per unit of measurement (USD).\n**COPPER**: Copper price per unit of measurement (USD).\n**WHEAT**: Wheat price per unit of measurement (USD).\n\nNote that some values are missing in the dataset, represented by NaN. These missing values occur for some of the commodities in the earlier years of the dataset.\n\nIt may be useful for time series analysis and predictive modeling.\n\nNaN values were included so that you as a Data Scientist can get some practice on dealing with NaN values.\n\nhttps://www.alphavantage.co/documentation/", "VersionNotes": "Data Update 2023-05-10", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 3240761, "CreatorUserId": 11109722, "OwnerUserId": 11109722.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5657366.0, "CurrentDatasourceVersionId": 5732767.0, "ForumId": 3306003, "Type": 2, "CreationDate": "05/08/2023 22:06:14", "LastActivityDate": "05/08/2023", "TotalViews": 958, "TotalDownloads": 165, "TotalVotes": 4, "TotalKernels": 1}]
|
[{"Id": 11109722, "UserName": "richeyjay", "DisplayName": "Ganesh Jainarain", "RegisterDate": "07/21/2022", "PerformanceTier": 2}]
|
# # Making the Commodity Dataset with the AlphaVantage API
# This Python code imports the `load_dotenv()` function from the `dotenv` package, which is used to load environment variables from a `.env` file. It also imports the `os` and `requests` packages.
# The code loads environment variables from the `.env` file using the `load_dotenv()` function. It then retrieves the value of the `ALPHA_VANTAGE_API_KEY` variable from the environment variables using the `os.environ.get()` function.
# Next, the code makes an API request to the Alpha Vantage API using the retrieved API key and the `requests.get()` function. The response data is then parsed as a JSON object using the `.json()` method.
# Finally, the code prints the parsed JSON data to the console using the `print()` function.
# # This code defines a `Commodity` class
# This code defines a `Commodity` class that provides methods to retrieve and transform commodity market data, and to save the data to a CSV file. The class constructor takes several optional parameters, including `outputsize`, `datatype`, and `api_key`. The `outputsize` parameter determines whether the API will return daily or monthly historical data for a commodity. The `datatype` parameter specifies whether the API will return data in JSON or CSV format. If `api_key` is not provided, the constructor will use the 'demo' key provided by the API.
# The `_get_data` method sends a GET request to the Alpha Vantage API to retrieve commodity market data. The method takes two optional parameters, `function` and `interval`. `function` specifies the type of data to be retrieved, such as intraday prices or historical prices. `interval` specifies the time interval for the data, such as daily or weekly. The `_get_data` method returns a JSON object containing the requested data.
# The `_transform_data_to_df` method transforms the data returned by `_get_data` into a Pandas dataframe. The method takes the JSON object returned by `_get_data` as an input. The method converts the date column to a `datetime` format, the value column to numeric format, and sets the date column as the dataframe index.
# The `to_csv` method saves the commodity market data to a CSV file. The method takes two parameters, `data` and `file_path`. If `data` is already a Pandas dataframe, the method saves it to a CSV file specified by `file_path`. Otherwise, the method uses the `_transform_data_to_df` method to transform the data to a Pandas dataframe and then saves it to a CSV file specified by `file_path`. If an error occurs while saving the data, an error message is printed to the console.
import pandas as pd
import numpy as np
import os
import requests
api_key = os.environ.get("ALPHA_VANTAGE_API_KEY")
class Commodity:
API_BASE_URL = "https://www.alphavantage.co/query"
"""
__init__(self, outputsize='compact', datatype='json', api_key=None) -
Initializes a Commodity object with default values for outputsize,
datatype, and api_key unless otherwise specified.
"""
def __init__(self, outputsize="compact", datatype="json", api_key=None):
if api_key is None:
api_key = "demo"
self.outputsize = outputsize
self.datatype = datatype
self.api_key = api_key
def _get_data(self, function, interval="daily", datatype="json"):
params = {
"function": function,
"interval": interval,
"datatype": datatype,
"apikey": self.api_key,
}
try:
response = requests.get(self.API_BASE_URL, params=params)
response.raise_for_status()
except requests.exceptions.RequestException as e:
print(f"Error: {e}")
return None
return response.json()
def _transform_data_to_df(self, data):
try:
df = pd.DataFrame(data["data"], columns=["date", "value"])
df["date"] = pd.to_datetime(df["date"])
df["value"] = df["value"].replace(".", np.nan)
df["value"] = pd.to_numeric(df["value"])
df.set_index("date", inplace=True)
return df
except (KeyError, TypeError) as e:
print(f"Error occurred while transforming data to dataframe: {e}")
return None
def to_csv(self, data, file_path):
try:
if isinstance(data, pd.DataFrame):
data.to_csv(file_path)
else:
df = self._transform_data_to_df(data)
df.to_csv(file_path)
print(f"Data saved to {file_path}")
except Exception as e:
print(f"Error saving data to {file_path}: {e}")
commodity = Commodity(api_key="your_api_key_here")
data = commodity._get_data("REAL_GDP", interval="monthly", datatype="json")
print(data)
# # This code makes use of the Commodity class and the Alpha Vantage API to retrieve data
# This code makes use of the Commodity class and the Alpha Vantage API to retrieve data for various commodities. The `COMMODITY_DICT` variable stores a set of commodity names, and a loop iterates through each name in the set.
# For each commodity, the `_get_data()` method of the `Commodity` class is called with the commodity name, an interval of 'monthly', and a datatype of 'json'. The resulting data is then printed to the console.
# After retrieving the data, it is saved to a JSON file with the same name as the commodity, using the `json.dump()` method.
# The variable `calls_count` is used to keep track of the number of API calls made. If `calls_count` reaches 5, the program sleeps for 5 minutes using the `time.sleep()` function before continuing. This is to avoid hitting the Alpha Vantage API rate limit.
import time
import json
commodity = Commodity(api_key="your_api_key_here")
COMMODITY_DICT = {
"WTI",
"BRENT",
"NATURAL_GAS",
"COPPER",
"ALUMINUM",
"WHEAT",
"CORN",
"COTTON",
"SUGAR",
"COFFEE",
"ALL_COMMODITIES",
}
calls_count = 0
for commodity_name in COMMODITY_DICT:
data = commodity._get_data(commodity_name, interval="monthly", datatype="json")
print(data)
# Save the data to a JSON file
with open(f"{commodity_name}.json", "w") as f:
json.dump(data, f)
calls_count += 1
if calls_count == 5:
time.sleep(300) # sleep for 5 minutes
calls_count = 0
# # This code reads in JSON files containing commodity data
# This code reads in JSON files containing commodity data, converts them to pandas data frames, and creates a dictionary of data frames for each commodity.
# First, the code defines a `folder_path` variable that specifies the directory where the JSON files are located. It also creates a dictionary `commodity_files` that maps commodity names to their corresponding file names.
# Next, the code initializes an empty dictionary `dfs` to store the data frames for each commodity. Then, it loops over each commodity in the `commodity_files` dictionary, reads in the data from the corresponding JSON file using the `json.load` function, and converts it to a pandas data frame. The data frame is then cleaned by converting the `date` column to a datetime format and converting the `value` column to numeric format. Finally, the cleaned data frame is added to the `dfs` dictionary using the commodity name as the key.
# After the loop completes, the `dfs` dictionary contains a data frame for each commodity, and you can access individual data frames using their commodity names as keys. The code prints the data frame for the 'WTI' commodity as an example.
import os
import pandas as pd
# Path to the folder where the JSON files are saved
folder_path = "/Users/richeyjay/Desktop/AlphaVantage/env/Code/JSON"
# Dictionary mapping commodity names to file names
commodity_files = {
"WTI": "WTI.json",
"BRENT": "BRENT.json",
"NATURAL_GAS": "NATURAL_GAS.json",
"COPPER": "COPPER.json",
"ALUMINUM": "ALUMINUM.json",
"WHEAT": "WHEAT.json",
"CORN": "CORN.json",
"COTTON": "COTTON.json",
"SUGAR": "SUGAR.json",
"COFFEE": "COFFEE.json",
"ALL_COMMODITIES": "ALL_COMMODITIES.json",
}
# Loop over the commodities and read in the data from the files
dfs = {}
for commodity, file_name in commodity_files.items():
file_path = os.path.join(folder_path, file_name)
with open(file_path, "r") as f:
data = json.load(f)
df = pd.DataFrame(data["data"], columns=["date", "value"])
df["date"] = pd.to_datetime(df["date"])
df["value"] = df["value"].replace(".", np.nan)
df["value"] = pd.to_numeric(df["value"])
df.set_index("date", inplace=True)
dfs[commodity] = df
# Now you have a dictionary of data frames, one for each commodity
print(dfs["WTI"])
dfs
# # This script loads all JSON files in a specified folder and creates a merged Pandas data frame from them. Each JSON file corresponds to a different commodity, so the script creates a separate column in the data frame for each commodity's price data. Here's a breakdown of what the code does:
# 1. Set the folder path where the JSON files are stored.
# 2. Create an empty list to store the data frames.
# 3. Loop through all files in the folder that end with ".json".
# 4. Load the JSON data from each file into a Pandas data frame.
# 5. Rename the "value" column to the name of the commodity (which is the file name without the ".json" extension).
# 6. Append each data frame to the list.
# 7. Merge all the data frames in the list into one data frame, using the "date" column as the join key.
# 8. Reset the index of the merged data frame and rename the columns.
# 9. Print the merged data frame.
# Overall, this script provides a useful way to merge multiple commodity price data sets into a single data frame for further analysis.
import os
import pandas as pd
# Set folder path
folder_path = "/Users/richeyjay/Desktop/AlphaVantage/env/Code/JSON"
# Create an empty list to store the data frames
df_list = []
# Loop through all the files in the folder
for file_name in os.listdir(folder_path):
if file_name.endswith(".json"):
# Get the full file path
file_path = os.path.join(folder_path, file_name)
# Load the JSON data from the file
with open(file_path, "r") as f:
data = json.load(f)
# Create a data frame from the JSON data
df = pd.DataFrame(data["data"], columns=["date", "value"])
df["date"] = pd.to_datetime(df["date"])
df["value"] = pd.to_numeric(df["value"], errors="coerce")
df.set_index("date", inplace=True)
# Rename the value column to the commodity name
commodity = file_name[:-5]
df.rename(columns={"value": commodity}, inplace=True)
# Add the data frame to the list
df_list.append(df)
# Merge all the data frames into one
merged_df = pd.merge(df_list[0], df_list[1], on="date", how="outer")
for df in df_list[2:]:
merged_df = pd.merge(merged_df, df, on="date", how="outer")
# Reset the index and rename the columns
merged_df = merged_df.reset_index()
merged_df = merged_df.rename(columns={"index": "date"})
# Print the merged data frame
print(merged_df)
merged_df
merged_df.columns
# # Save data to a csv file
# save dataframe to csv file
merged_df.to_csv("COMMODITY_DATA.csv", index=False)
# # Create Train test split csv files
from sklearn.model_selection import train_test_split
# Assuming your dataset is stored in a pandas dataframe called `merged_df`
# Split the data into training and testing sets
train_df, test_df = train_test_split(merged_df, test_size=0.2, random_state=42)
# Write the training set to a CSV file
train_df.to_csv("train.csv", index=False)
# Write the testing set to a CSV file
test_df.to_csv("test.csv", index=False)
# # Import packages so that we can explore data
# load packages
import sys # access to system parameters https://docs.python.org/3/library/sys.html
print("Python version: {}".format(sys.version))
import pandas as pd # collection of functions for data processing and analysis modeled after R dataframes with SQL like features
print("pandas version: {}".format(pd.__version__))
import matplotlib # collection of functions for scientific and publication-ready visualization
print("matplotlib version: {}".format(matplotlib.__version__))
import numpy as np # foundational package for scientific computing
print("NumPy version: {}".format(np.__version__))
import scipy as sp  # collection of functions for scientific computing and advanced mathematics
print("SciPy version: {}".format(sp.__version__))
import IPython
from IPython import display # pretty printing of dataframes in Jupyter notebook
from IPython.display import HTML, display
print("IPython version: {}".format(IPython.__version__))
import sklearn # collection of machine learning algorithms
print("scikit-learn version: {}".format(sklearn.__version__))
# misc libraries
import random
import time
# ignore warnings
import warnings
warnings.filterwarnings("ignore")
print("-" * 25)
# # Common ML Algorithms
# Common Model Algorithms
from sklearn import (
svm,
tree,
linear_model,
neighbors,
naive_bayes,
ensemble,
discriminant_analysis,
gaussian_process,
)
# Common Model Helpers
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn import feature_selection
from sklearn import model_selection
from sklearn import metrics
# Visualization
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import seaborn as sns
from pandas.plotting import scatter_matrix
# Configure Visualization Defaults
# %matplotlib inline = show plots in Jupyter Notebook browser
mpl.style.use("ggplot")
sns.set_style("white")
sns.set_style("darkgrid")
pylab.rcParams["figure.figsize"] = 12, 8
print("Seaborn version: {}".format(sns.__version__))
# # Import data into variables
# PATH = "/kaggle/input/playground-series-s3e14"
TRAIN_FILENAME = "/Users/richeyjay/Desktop/AlphaVantage/env/Code/train.csv"
TEST_FILENAME = "/Users/richeyjay/Desktop/AlphaVantage/env/Code/test.csv"
train_data = pd.read_csv(TRAIN_FILENAME)
print(train_data.shape)
print("-" * 50)
test_data = pd.read_csv(TEST_FILENAME)
print(test_data.shape)
print("-" * 50)
# # Implementation of a Data Explorer Class so it can be easier to explore our data
class DataExplorer:
def __init__(self, data):
self.data = data
def print_info(self):
try:
print(self.data.info())
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_null_values(self):
try:
if self.data.isnull().values.any():
print("Null values found.")
null_cols = self.data.columns[self.data.isnull().any()]
for col in null_cols:
print(
f"{col} column has {self.data[col].isnull().sum()} null values."
)
else:
print("No null values found.")
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_duplicate_rows(self):
try:
if self.data.duplicated().sum() > 0:
print(f"{self.data.duplicated().sum()} duplicate rows found.")
else:
print("No duplicate rows found.")
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_data_shape(self):
try:
print(f"Number of rows: {self.data.shape[0]}")
print(f"Number of columns: {self.data.shape[1]}")
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def describe_numeric_features(self):
try:
numeric_cols = self.data.select_dtypes(
include=["float64", "int64"]
).columns.tolist()
if not numeric_cols:
print("No numeric columns found.")
else:
print(self.data[numeric_cols].describe())
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def describe_categorical_features(self):
try:
categorical_cols = self.data.select_dtypes(
include=["object"]
).columns.tolist()
if not categorical_cols:
print("No categorical columns found.")
else:
print(self.data[categorical_cols].describe())
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_duplicates(self):
try:
print(f"Number of duplicate rows: {self.data.duplicated().sum()}")
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_columns(self):
try:
print(self.data.columns)
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def check_data_types(self):
try:
print(self.data.dtypes)
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def drop_duplicates(self):
try:
self.data.drop_duplicates(inplace=True)
if self.data.duplicated().sum() == 0:
print("No duplicates found.")
else:
print("Duplicates dropped.")
except AttributeError:
print("Error: Invalid data type provided. Must be a Pandas DataFrame.")
print("-" * 50)
def drop_column(self, column):
try:
self.data.drop(column, axis=1, inplace=True)
print(f"Column {column} dropped")
print("-" * 50)
except Exception as e:
print(f"Error: {e}")
def fill_null(self, column, value):
try:
self.data[column].fillna(value, inplace=True)
print(f"Null values in {column} filled with {value}")
print("-" * 50)
except Exception as e:
print(f"Error: {e}")
def clean_data(self):
try:
self.check_null_values()
self.drop_duplicates()
if "id" in self.data.columns:
self.drop_column("id")
print("Data cleaned")
print("-" * 50)
except Exception as e:
print(f"Error: {e}")
def get_min(self):
try:
for col in self.data.columns:
print(f"Minimum value in {col}: {self.data[col].min()}")
except TypeError:
print("Columns must contain numeric values.")
except:
print("An error occurred while getting the minimum value.")
finally:
print("-" * 50)
def get_max(self):
numeric_cols = self.data.select_dtypes(
include=["float64", "int64"]
).columns.tolist()
if not numeric_cols:
print("No numeric columns found.")
else:
for col in numeric_cols:
try:
print(f"Maximum value in {col}: {self.data[col].max()}")
except TypeError:
print(f"Column {col} must contain numeric values.")
except:
print(
f"An error occurred while getting the maximum value for column {col}."
)
finally:
print("-" * 50)
train_data_explorer = DataExplorer(train_data)
train_data_explorer.check_null_values()
train_data_explorer.check_columns()
train_data_explorer.drop_column("date")
train_data_explorer.describe_numeric_features()
train_data_explorer.print_info()
train_data
# # Feature Correlation
# correlation heatmap of dataset
def correlation_heatmap(df):
_, ax = plt.subplots(figsize=(14, 12), dpi=300)
colormap = sns.diverging_palette(220, 10, as_cmap=True)
_ = sns.heatmap(
df.corr(),
cmap="icefire",
square=True,
cbar_kws={"shrink": 0.5},
ax=ax,
annot=True,
linewidths=0.1,
vmax=1.0,
linecolor="black",
annot_kws={"fontsize": 12},
)
plt.title("Pearson Correlation of Features for Training Dataset", y=1.05, size=15)
correlation_heatmap(train_data)
| false | 0 | 5,340 | 0 | 5,816 | 5,340 |
||
129063299
|
<jupyter_start><jupyter_text>GoogleNews-vectors-negative300 ( word2vec )
## word2vec
This repository hosts the word2vec pre-trained Google News corpus (3 billion running words) word vector model (3 million 300-dimension English word vectors).
Kaggle dataset identifier: googlenewsvectors
<jupyter_script>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk("/kaggle/input"):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from sklearn.preprocessing import LabelEncoder
from gensim.models.keyedvectors import KeyedVectors
from keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# Load the keyed vectors
word_vectors = KeyedVectors.load_word2vec_format(
"/kaggle/input/googlenewsvectors/GoogleNews-vectors-negative300.bin", binary=True
)
# Load the data
data = pd.read_csv(
"/kaggle/input/2-recommended-reads-conversion-of-data-to-num/vectorizedData.csv"
)
data = data.drop_duplicates(subset=["booktitle", "authorname"], keep="first")
# Tokenize and vectorize the first book description
tokenizer = Tokenizer()
tokenizer.fit_on_texts(data["bookdescription"])
sequences = tokenizer.texts_to_sequences(data["bookdescription"])
word_index = tokenizer.word_index
max_len = 100 # set the maximum length of the sequence
X_desc = pad_sequences(sequences, maxlen=max_len)
X_vec = np.zeros((len(data), word_vectors.vector_size))
for i, desc in enumerate(data["bookdescription"]):
words = desc.split()
vectors = [word_vectors[word] for word in words if word in word_vectors]
if vectors:
X_vec[i] = np.mean(vectors, axis=0)
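# Descriptions whose words are all out of the word2vec vocabulary keep the all-zero
# vector initialized above.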
# Combine the tokenized and vectorized features
X = np.concatenate((X_desc, X_vec), axis=1)
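# Note: X_desc holds integer token IDs while X_vec holds averaged 300-d embeddings,
# so the two feature groups sit on very different scales; standardizing the combined
# matrix (e.g. with sklearn's StandardScaler) may help the dense network converge.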
# Encoding the type column
label_encoder = LabelEncoder()
label_encoder.fit(data["type"])
y = label_encoder.transform(data["type"])
# Splitting the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Creating and training the model
model = Sequential()
model.add(Dense(512, input_dim=X_train.shape[1], activation="relu"))
model.add(Dense(216, activation="relu"))
model.add(Dense(32, activation="relu"))
model.add(Dense(len(label_encoder.classes_), activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(
X_train,
pd.get_dummies(y_train),
epochs=50,
batch_size=32,
validation_data=(X_test, pd.get_dummies(y_test)),
)
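# An equivalent setup sketch (not fit here to avoid retraining): with the integer
# labels produced by LabelEncoder, sparse_categorical_crossentropy can be used
# directly, which removes the need for the pd.get_dummies one-hot conversion above.
model_sparse = Sequential()
model_sparse.add(Dense(512, input_dim=X_train.shape[1], activation="relu"))
model_sparse.add(Dense(len(label_encoder.classes_), activation="softmax"))
model_sparse.compile(
    loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"]
)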
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/063/129063299.ipynb
|
googlenewsvectors
|
adarshsng
|
[{"Id": 129063299, "ScriptId": 38349096, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4842847, "CreationDate": "05/10/2023 17:15:15", "VersionNumber": 2.0, "Title": "#6 recommended reads - Deep Learning Keras", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 73.0, "LinesInsertedFromPrevious": 9.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 64.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
|
[{"Id": 184788256, "KernelVersionId": 129063299, "SourceDatasetVersionId": 2307650}]
|
[{"Id": 2307650, "DatasetId": 1391881, "DatasourceVersionId": 2349008, "CreatorUserId": 3974712, "LicenseName": "Unknown", "CreationDate": "06/06/2021 09:26:00", "VersionNumber": 1.0, "Title": "GoogleNews-vectors-negative300 ( word2vec )", "Slug": "googlenewsvectors", "Subtitle": "This repository hosts the word2vec pre-trained Google News corpus - word2vec", "Description": "## word2vec\n\nThis repository hosts the word2vec pre-trained Google News corpus (3 billion running words) word vector model (3 million 300-dimension English word vectors).", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
|
[{"Id": 1391881, "CreatorUserId": 3974712, "OwnerUserId": 3974712.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2307650.0, "CurrentDatasourceVersionId": 2349008.0, "ForumId": 1411117, "Type": 2, "CreationDate": "06/06/2021 09:26:00", "LastActivityDate": "06/06/2021", "TotalViews": 3273, "TotalDownloads": 216, "TotalVotes": 16, "TotalKernels": 12}]
|
[{"Id": 3974712, "UserName": "adarshsng", "DisplayName": "KA-KA-shi", "RegisterDate": "11/04/2019", "PerformanceTier": 2}]
|
| false | 1 | 846 | 0 | 923 | 846 |
||
129063049
|
<jupyter_start><jupyter_text>SMS Spam Collection Dataset
## Context
The SMS Spam Collection is a set of tagged SMS messages that have been collected for SMS spam research. It contains one set of 5,574 English SMS messages, each tagged as ham (legitimate) or spam.
## Content
The files contain one message per line. Each line is composed of two columns: v1 contains the label (ham or spam) and v2 contains the raw text.
This corpus has been collected from free or free-for-research sources on the Internet:
-> A collection of 425 SMS spam messages was manually extracted from the Grumbletext Web site. This is a UK forum in which cell phone users make public claims about SMS spam messages, most of them without including the actual spam message received. Identifying the text of spam messages in the claims is a hard and time-consuming task, and it involved carefully scanning hundreds of web pages. The Grumbletext Web site is: [Web Link][1].
-> A subset of 3,375 randomly chosen ham SMS messages from the NUS SMS Corpus (NSC), a dataset of about 10,000 legitimate messages collected for research at the Department of Computer Science at the National University of Singapore. The messages largely originate from Singaporeans, mostly students attending the University, and were collected from volunteers who were made aware that their contributions would be made publicly available. The NUS SMS Corpus is available at: [Web Link][2].
-> A list of 450 SMS ham messages collected from Caroline Tag's PhD Thesis, available at [Web Link][3].
-> Finally, we have incorporated the SMS Spam Corpus v.0.1 Big. It has 1,002 SMS ham messages and 322 spam messages and is publicly available at: [Web Link][4]. This corpus has been used in several academic studies.
## Acknowledgements
The original dataset can be found [here](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection). The creators would like to note that in case you find the dataset useful, please make a reference to previous paper and the web page: http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/ in your papers, research, etc.
We offer a comprehensive study of this corpus in the following paper. This work presents a number of statistics, studies and baseline results for several machine learning methods.
Almeida, T.A., Gómez Hidalgo, J.M., Yamakami, A. Contributions to the Study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11), Mountain View, CA, USA, 2011.
## Inspiration
* Can you use this dataset to build a prediction model that will accurately classify which texts are spam?
[1]: http://www.grumbletext.co.uk/
[2]: http://www.comp.nus.edu.sg/~rpnlpir/downloads/corpora/smsCorpus/
[3]: http://etheses.bham.ac.uk/253/1/Tagg09PhD.pdf
[4]: http://www.esp.uem.es/jmgomez/smsspamcorpus/
Kaggle dataset identifier: sms-spam-collection-dataset
<jupyter_script># !pip install seaborn
# !pip install sklearn
# !pip install --user scikit-learn
# !pip install matplotlib
# !pip install nltk
import pandas as pd
import numpy as np
import string
import seaborn as sns
import matplotlib.pyplot as plt
from wordcloud import WordCloud
from sklearn.decomposition import PCA
import matplotlib
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from collections import Counter
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
data = pd.read_csv("../input/sms-spam-collection-dataset/spam.csv", encoding="latin-1")
data.drop(["Unnamed: 2", "Unnamed: 3", "Unnamed: 4"], axis=1, inplace=True)
data.columns = ["class", "message"]
data.head(10)
data.shape
data["length"] = [len(d) for d in data.message]
data.head()
data.groupby("class").describe()
sns.countplot(x=data["class"])
sns.barplot(x=data["class"], y=data["length"])
data.drop(["length"], axis=1, inplace=True)
# Build two WordCloud plots, one for spam messages and one for legitimate (ham) messages
spam_text = " ".join(data[data["class"] == "spam"]["message"])
spam_wordcloud = WordCloud(width=800, height=400, background_color="white").generate(
spam_text
)
ham_text = " ".join(data[data["class"] == "ham"]["message"])
ham_wordcloud = WordCloud(width=800, height=400, background_color="white").generate(
ham_text
)
# Display the plots
fig, ax = plt.subplots(1, 2, figsize=(20, 15))
ax[0].imshow(spam_wordcloud)
ax[0].set_title("Spam messages")
ax[0].axis("off")
ax[1].imshow(ham_wordcloud)
ax[1].set_title("Ham (legitimate) messages")
ax[1].axis("off")
plt.show()
data.head()
def transform_message(message):
    # Strip punctuation characters and rebuild the string.
    message_not_punc = "".join(
        char for char in message if char not in string.punctuation
    )
    # Split into words and drop English stopwords.
    stop_words = set(stopwords.words("english"))
    message_clean = [
        word for word in message_not_punc.split(" ") if word.lower() not in stop_words
    ]
    return message_clean
data["message"].apply(transform_message)
vectorization = CountVectorizer(analyzer=transform_message)
X = vectorization.fit(data["message"])
X_transform = X.transform(data["message"])
print(X_transform)
tfidf_transformer = TfidfTransformer().fit(X_transform)
X_tfidf = tfidf_transformer.transform(X_transform)
print(X_tfidf.shape)
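# Note: X_tfidf is only inspected here; the cells below rebuild the features with a
# separate CountVectorizer fitted on the training split.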
X_train, X_test, y_train, y_test = train_test_split(
data["message"],
data["class"],
test_size=0.3,
random_state=50,
stratify=data["class"],
)
X_train, y_train
bow = CountVectorizer(stop_words="english")
# Fit the bag of words on the training docs
bow.fit(X_train)
X_train = bow.transform(X_train)
X_test = bow.transform(X_test)
X_train, X_test
# **Naive Bayes Classifier**
naive_bayes = MultinomialNB()
naive_bayes.fit(X_train, y_train)
print(f"Accuracy : {accuracy_score(y_test, naive_bayes.predict(X_test)):.3f}")
print(
f'Precision : {precision_score(y_test, naive_bayes.predict(X_test), pos_label="spam"):.3f}'
)
print(
f'Recall : {recall_score(y_test, naive_bayes.predict(X_test), pos_label="spam"):.3f}'
)
print(
f'F1-Score : {f1_score(y_test, naive_bayes.predict(X_test), pos_label="spam"):.3f}'
)
predictions = naive_bayes.predict(X_test)
print(predictions)
print(classification_report(y_test, predictions))
# **Support Vector Machine Classifier**
svm_clf = SVC(kernel="linear").fit(X_train, y_train)
print(f"Accuracy : {accuracy_score(y_test, svm_clf.predict(X_test)):.3f}")
print(
f'Precision : {precision_score(y_test, svm_clf.predict(X_test), pos_label="spam"):.3f}'
)
print(f'Recall : {recall_score(y_test, svm_clf.predict(X_test), pos_label="spam"):.3f}')
print(f'F1-Score : {f1_score(y_test, svm_clf.predict(X_test), pos_label="spam"):.3f}')
svm_predictions = svm_clf.predict(X_test)
print(classification_report(y_test, svm_predictions))
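# confusion_matrix is imported above but not used; an illustrative call for the SVM
# predictions (rows = true class, columns = predicted class):
print(confusion_matrix(y_test, svm_predictions, labels=["ham", "spam"]))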
X = bow.fit_transform(data["message"])
# Apply PCA to reduce the bag-of-words count matrix to 2 dimensions
# (note: the vectorizer was refit on the full dataset in the line above, so its
# vocabulary differs from the one used for training earlier)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X.toarray())
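# Illustrative check of how much variance the two components retain; with thousands
# of sparse count features this fraction is typically small.
print(pca.explained_variance_ratio_.sum())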
# Plot the messages in two dimensions
colors = ["red", "green"]
plt.scatter(
X_2d[:, 0],
X_2d[:, 1],
c=data["class"].apply(lambda x: 0 if x == "ham" else 1),
cmap=matplotlib.colors.ListedColormap(colors),
)
plt.show()
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.svm import SVC
import matplotlib.pyplot as plt
# Train the SVM model on the 2-D PCA data
# (clf refers to the same estimator object as svm_clf, so this refits it)
y = data["class"]
clf = svm_clf
clf.fit(X_2d, y)
# Plot the messages in two dimensions
colors = ["red", "green"]
plt.scatter(
X_2d[:, 0],
X_2d[:, 1],
c=data["class"].apply(lambda x: 0 if x == "ham" else 1),
cmap=matplotlib.colors.ListedColormap(colors),
)
# Derive the separating line from the fitted hyperplane coefficients
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(X_2d[:, 0].min(), X_2d[:, 0].max())
yy = a * xx - (clf.intercept_[0]) / w[1]
# Draw the SVM decision boundary (hyperplane)
plt.plot(xx, yy, "k-")
plt.xlabel("Component 1")
plt.ylabel("Component 2")
plt.ylim(top=6, bottom=-1)
plt.xlim([-2, 26])
plt.title("PCA Visualization of Spam Classification with SVM")
plt.show()
# Add a new sentence and show where it falls on the plot
clf.fit(X_2d, y)
new_text = "Free entry call txt now"
new_2d = pca.transform(bow.transform([new_text]).toarray())
if clf.predict(new_2d)[0] == "ham":
color = "blue"
else:
color = "orange"
# Stack the new point together with the existing data points (not used further below)
new_X_2d = np.vstack((X_2d, new_2d))
new_y = pd.concat([y, pd.Series(["test"])])
# Draw a scatter plot for each kind of point
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
ax.scatter(
X_2d[:, 0],
X_2d[:, 1],
c=y.apply(lambda x: 0 if x == "ham" else 1),
cmap=matplotlib.colors.ListedColormap(["red", "green"]),
label="Data points",
)
ax.scatter(new_2d[:, 0], new_2d[:, 1], c=color, s=100, label="New point")
ax.plot(xx, yy, "k-", label="SVM")
# Label the x and y axes and show the plot
# plt.legend()
plt.xlabel("Component 1")
plt.ylabel("Component 2")
plt.ylim(top=6, bottom=-1)
plt.xlim([-2, 26])
plt.title("PCA Visualization of Spam Classification with SVM")
plt.show()
svm_clf.fit(X, y)
# To check whether a sentence is spam or ham, use transform() to convert it into a
# bag-of-words vector (the SVM was refit on the full dataset in the line above).
# new_text = "free call txt won now"
new_X = bow.transform([new_text]).toarray()
# Use the trained SVM model to predict whether this sentence is spam or ham
if svm_clf.predict(new_X)[0] == "ham":
    print("This is a legitimate message (ham)")
else:
    print("This is a spam message (spam)")
data["message"][2]
|
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/063/129063049.ipynb
|
sms-spam-collection-dataset
| null |
[{"Id": 129063049, "ScriptId": 38365203, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11328944, "CreationDate": "05/10/2023 17:12:48", "VersionNumber": 1.0, "Title": "Spam message detection using SVM and Naive Bayes", "EvaluationDate": "05/10/2023", "IsChange": true, "TotalLines": 241.0, "LinesInsertedFromPrevious": 126.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 115.0, "LinesInsertedFromFork": 126.0, "LinesDeletedFromFork": 15.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 115.0, "TotalVotes": 0}]
|
[{"Id": 184787657, "KernelVersionId": 129063049, "SourceDatasetVersionId": 982}]
|
[{"Id": 982, "DatasetId": 483, "DatasourceVersionId": 982, "CreatorUserId": 395512, "LicenseName": "Unknown", "CreationDate": "12/02/2016 19:29:17", "VersionNumber": 1.0, "Title": "SMS Spam Collection Dataset", "Slug": "sms-spam-collection-dataset", "Subtitle": "Collection of SMS messages tagged as spam or legitimate", "Description": "## Context \n\nThe SMS Spam Collection is a set of SMS tagged messages that have been collected for SMS Spam research. It contains one set of SMS messages in English of 5,574 messages, tagged acording being ham (legitimate) or spam. \n\n## Content\n\nThe files contain one message per line. Each line is composed by two columns: v1 contains the label (ham or spam) and v2 contains the raw text. \n\nThis corpus has been collected from free or free for research sources at the Internet: \n\n-> A collection of 425 SMS spam messages was manually extracted from the Grumbletext Web site. This is a UK forum in which cell phone users make public claims about SMS spam messages, most of them without reporting the very spam message received. The identification of the text of spam messages in the claims is a very hard and time-consuming task, and it involved carefully scanning hundreds of web pages. The Grumbletext Web site is:[ \\[Web Link\\]][1]. \n-> A subset of 3,375 SMS randomly chosen ham messages of the NUS SMS Corpus (NSC), which is a dataset of about 10,000 legitimate messages collected for research at the Department of Computer Science at the National University of Singapore. The messages largely originate from Singaporeans and mostly from students attending the University. These messages were collected from volunteers who were made aware that their contributions were going to be made publicly available. The NUS SMS Corpus is avalaible at:[ \\[Web Link\\]][2]. \n-> A list of 450 SMS ham messages collected from Caroline Tag's PhD Thesis available at[ \\[Web Link\\]][3]. \n-> Finally, we have incorporated the SMS Spam Corpus v.0.1 Big. It has 1,002 SMS ham messages and 322 spam messages and it is public available at:[ \\[Web Link\\]][4]. This corpus has been used in the following academic researches: \n\n\n## Acknowledgements\n\nThe original dataset can be found [here](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection). The creators would like to note that in case you find the dataset useful, please make a reference to previous paper and the web page: http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/ in your papers, research, etc.\n\nWe offer a comprehensive study of this corpus in the following paper. This work presents a number of statistics, studies and baseline results for several machine learning methods. \n\nAlmeida, T.A., G\u00c3\u00b3mez Hidalgo, J.M., Yamakami, A. Contributions to the Study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11), Mountain View, CA, USA, 2011.\n\n## Inspiration\n\n* Can you use this dataset to build a prediction model that will accurately classify which texts are spam?\n\n\n [1]: http://www.grumbletext.co.uk/\n [2]: http://www.comp.nus.edu.sg/~rpnlpir/downloads/corpora/smsCorpus/\n [3]: http://etheses.bham.ac.uk/253/1/Tagg09PhD.pdf\n [4]: http://www.esp.uem.es/jmgomez/smsspamcorpus/", "VersionNotes": "Initial release", "TotalCompressedBytes": 503663.0, "TotalUncompressedBytes": 503663.0}]
|
[{"Id": 483, "CreatorUserId": 395512, "OwnerUserId": NaN, "OwnerOrganizationId": 7.0, "CurrentDatasetVersionId": 982.0, "CurrentDatasourceVersionId": 982.0, "ForumId": 2105, "Type": 2, "CreationDate": "12/02/2016 19:29:17", "LastActivityDate": "02/06/2018", "TotalViews": 706358, "TotalDownloads": 151837, "TotalVotes": 1257, "TotalKernels": 1007}]
| null |
| false | 0 | 2,712 | 0 | 3,532 | 2,712 |