Column schema (string columns report min–max length; int columns report min–max value):

| Column | Type | Range / Classes |
|---|---|---|
| file_id | string | length 5–9 |
| content | string | length 100–5.25M |
| local_path | string | length 66–70 |
| kaggle_dataset_name | string | length 3–50 |
| kaggle_dataset_owner | string | length 3–20 |
| kversion | string | length 497–763 |
| kversion_datasetsources | string | length 71–5.46k |
| dataset_versions | string | length 338–235k |
| datasets | string | length 334–371 |
| users | string | length 111–264 |
| script | string | length 100–5.25M |
| df_info | string | length 0–4.87M |
| has_data_info | bool | 2 classes |
| nb_filenames | int64 | 0–370 |
| retreived_data_description | string | length 0–4.44M |
| script_nb_tokens | int64 | 25–663k |
| upvotes | int64 | 0–1.65k |
| tokens_description | int64 | 25–663k |
| tokens_script | int64 | 25–663k |
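Each row pairs a Kaggle notebook (`content`/`script`) with metadata about its source dataset. A minimal sketch of inspecting such a table with pandas — the file name `kaggle_code.parquet` and the parquet storage format are assumptions for illustration, not part of this dump:

```python
import pandas as pd

# Hypothetical local copy of the dump; the path and format are assumed.
df = pd.read_parquet("kaggle_code.parquet")

# Keep rows whose dataframe metadata was recovered, then surface the
# most-upvoted notebook per source dataset.
with_info = df[df["has_data_info"]]
top = (with_info.sort_values("upvotes", ascending=False)
                .groupby("kaggle_dataset_name")
                .head(1))
print(top[["file_id", "kaggle_dataset_name", "upvotes", "script_nb_tokens"]])
```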
129534551
<jupyter_start><jupyter_text>Car Price Prediction Multiple Linear Regression ### Problem Statement A Chinese automobile company Geely Auto aspires to enter the US market by setting up their manufacturing unit there and producing cars locally to give competition to their US and European counterparts. They have contracted an automobile consulting company to understand the factors on which the pricing of cars depends. Specifically, they want to understand the factors affecting the pricing of cars in the American market, since those may be very different from the Chinese market. The company wants to know: Which variables are significant in predicting the price of a car How well those variables describe the price of a car Based on various market surveys, the consulting firm has gathered a large data set of different types of cars across the America market. ### Business Goal We are required to model the price of cars with the available independent variables. It will be used by the management to understand how exactly the prices vary with the independent variables. They can accordingly manipulate the design of the cars, the business strategy etc. to meet certain price levels. Further, the model will be a good way for management to understand the pricing dynamics of a new market. ### Please Note : The dataset provided is for learning purpose. Please don’t draw any inference with real world scenario. Kaggle dataset identifier: car-price-prediction <jupyter_code>import pandas as pd df = pd.read_csv('car-price-prediction/CarPrice_Assignment.csv') df.info() <jupyter_output><class 'pandas.core.frame.DataFrame'> RangeIndex: 205 entries, 0 to 204 Data columns (total 26 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 car_ID 205 non-null int64 1 symboling 205 non-null int64 2 CarName 205 non-null object 3 fueltype 205 non-null object 4 aspiration 205 non-null object 5 doornumber 205 non-null object 6 carbody 205 non-null object 7 drivewheel 205 non-null object 8 enginelocation 205 non-null object 9 wheelbase 205 non-null float64 10 carlength 205 non-null float64 11 carwidth 205 non-null float64 12 carheight 205 non-null float64 13 curbweight 205 non-null int64 14 enginetype 205 non-null object 15 cylindernumber 205 non-null object 16 enginesize 205 non-null int64 17 fuelsystem 205 non-null object 18 boreratio 205 non-null float64 19 stroke 205 non-null float64 20 compressionratio 205 non-null float64 21 horsepower 205 non-null int64 22 peakrpm 205 non-null int64 23 citympg 205 non-null int64 24 highwaympg 205 non-null int64 25 price 205 non-null float64 dtypes: float64(8), int64(8), object(10) memory usage: 41.8+ KB <jupyter_text>Examples: { "car_ID": 1, "symboling": 3, "CarName": "alfa-romero giulia", "fueltype": "gas", "aspiration": "std", "doornumber": "two", "carbody": "convertible", "drivewheel": "rwd", "enginelocation": "front", "wheelbase": 88.6, "carlength": 168.8, "carwidth": 64.1, "carheight": 48.8, "curbweight": 2548, "enginetype": "dohc", "cylindernumber": "four", "enginesize": 130, "fuelsystem": "mpfi", "boreratio": 3.47, "stroke": 2.68, "...": "and 6 more columns" } { "car_ID": 2, "symboling": 3, "CarName": "alfa-romero stelvio", "fueltype": "gas", "aspiration": "std", "doornumber": "two", "carbody": "convertible", "drivewheel": "rwd", "enginelocation": "front", "wheelbase": 88.6, "carlength": 168.8, "carwidth": 64.1, "carheight": 48.8, "curbweight": 2548, "enginetype": "dohc", "cylindernumber": "four", "enginesize": 130, "fuelsystem": "mpfi", "boreratio": 3.47, 
"stroke": 2.68, "...": "and 6 more columns" } { "car_ID": 3, "symboling": 1, "CarName": "alfa-romero Quadrifoglio", "fueltype": "gas", "aspiration": "std", "doornumber": "two", "carbody": "hatchback", "drivewheel": "rwd", "enginelocation": "front", "wheelbase": 94.5, "carlength": 171.2, "carwidth": 65.5, "carheight": 52.4, "curbweight": 2823, "enginetype": "ohcv", "cylindernumber": "six", "enginesize": 152, "fuelsystem": "mpfi", "boreratio": 2.68, "stroke": 3.47, "...": "and 6 more columns" } { "car_ID": 4, "symboling": 2, "CarName": "audi 100 ls", "fueltype": "gas", "aspiration": "std", "doornumber": "four", "carbody": "sedan", "drivewheel": "fwd", "enginelocation": "front", "wheelbase": 99.8, "carlength": 176.6, "carwidth": 66.2, "carheight": 54.3, "curbweight": 2337, "enginetype": "ohc", "cylindernumber": "four", "enginesize": 109, "fuelsystem": "mpfi", "boreratio": 3.19, "stroke": 3.4, "...": "and 6 more columns" } <jupyter_script># # Import Libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy import stats import seaborn as sns import pylab import warnings warnings.filterwarnings("ignore") sns.set(style="darkgrid", font_scale=1.5) pd.set_option("display.max.columns", None) pd.set_option("display.max.rows", None) from sklearn.linear_model import LinearRegression from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import ( RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor, ) from xgboost import XGBRegressor from catboost import CatBoostRegressor from lightgbm import LGBMRegressor from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score from sklearn.preprocessing import StandardScaler # # Loading Datasets df = pd.read_csv("/kaggle/input/car-price-prediction/CarPrice_Assignment.csv") df.head() # # EDA df.shape df.info() df.columns df.describe().T df.isna().sum() df.duplicated().sum() df.nunique() df.select_dtypes(include="object").head() df.select_dtypes(include=["int", "float"]).head() Company_Name = df["CarName"].apply(lambda x: x.split(" ")[0]) df.insert(2, "CompanyName", Company_Name) df.drop(columns=["CarName"], inplace=True) df.head() df["CompanyName"].nunique() df["CompanyName"].unique() def replace(a, b): df["CompanyName"].replace(a, b, inplace=True) replace("maxda", "mazda") replace("porcshce", "porsche") replace("toyouta", "toyota") replace("vokswagen", "volkswagen") replace("vw", "volkswagen") df["CompanyName"].unique() sns.pairplot(df, hue="price") plt.figure(figsize=(15, 6)) plt.subplot(1, 2, 1) sns.distplot(df["price"], color="red", kde=True) plt.title("Car Price Distribution", fontweight="black", pad=20, fontsize=20) plt.subplot(1, 2, 2) sns.boxplot(y=df["price"], palette="Set2") plt.title("Car Price Spread", fontweight="black", pad=20, fontsize=20) plt.tight_layout() plt.show() df["price"].agg(["min", "mean", "median", "max", "std", "skew"]).to_frame().T plt.figure(figsize=(14, 6)) counts = df["CompanyName"].value_counts() sns.barplot(x=counts.index, y=counts.values) plt.xlabel("Car Company") plt.ylabel("Total No. 
of cars sold") plt.title("Total Cars produced by Companies", pad=20, fontweight="black", fontsize=20) plt.xticks(rotation=90) plt.show() df[df["CompanyName"] == "mercury"] df[df["CompanyName"] == "Nissan"] df[df["CompanyName"] == "renault"] plt.figure(figsize=(15, 6)) plt.subplot(1, 2, 1) sns.boxplot(x="CompanyName", y="price", data=df) plt.xticks(rotation=90) plt.title("Car Company vs Price", pad=10, fontweight="black", fontsize=20) plt.subplot(1, 2, 2) x = pd.DataFrame(df.groupby("CompanyName")["price"].mean().sort_values(ascending=False)) sns.barplot(x=x.index, y="price", data=x) plt.xticks(rotation=90) plt.title("Car Company vs Average Price", pad=10, fontweight="black", fontsize=20) plt.tight_layout() plt.show() # # Visualizing Car Fuel Type Feature def categorical_visualization(cols): plt.figure(figsize=(20, 8)) plt.subplot(1, 3, 1) sns.countplot(x=cols, data=df, palette="Set2", order=df[cols].value_counts().index) plt.title(f"{cols} Distribution", pad=10, fontweight="black", fontsize=18) plt.xticks(rotation=90) plt.subplot(1, 3, 2) sns.boxplot(x=cols, y="price", data=df, palette="Set2") plt.title(f"{cols} vs Price", pad=20, fontweight="black", fontsize=18) plt.xticks(rotation=90) plt.subplot(1, 3, 3) x = pd.DataFrame(df.groupby(cols)["price"].mean().sort_values(ascending=False)) sns.barplot(x=x.index, y="price", data=x, palette="Set2") plt.title(f"{cols} vs Average Price", pad=20, fontweight="black", fontsize=18) plt.xticks(rotation=90) plt.tight_layout() plt.show() categorical_visualization("fueltype") # # Visualizing Aspiration Feature categorical_visualization("aspiration") # # Visualizing Door Number Feature categorical_visualization("doornumber") # --- # # Visualizing Car Body Type Feature categorical_visualization("carbody") # # Visualizing Drive Wheel Feature categorical_visualization("drivewheel") # # Visualizing Engine Location Feature categorical_visualization("enginelocation") df[df["enginelocation"] == "rear"] # # Visualizing Engine Type Feature categorical_visualization("enginetype") df[df["enginetype"] == "dohcv"] df[df["enginetype"] == "rotor"] # # Visualizing Cylinder Number Feature categorical_visualization("cylindernumber") df[df["cylindernumber"] == "three"] df[df["cylindernumber"] == "twelve"] # # Visualizing Fuel System Feature categorical_visualization("fuelsystem") df[df["fuelsystem"] == "mfi"] df[df["fuelsystem"] == "spfi"] # # Visualizing Symboling Feature categorical_visualization("symboling") # # Visualizing "CarLength", "CarWidth" & "CarHeight" Features w.r.t. "Price" def scatter_plot(cols): x = 1 plt.figure(figsize=(15, 6)) for col in cols: plt.subplot(1, 3, x) sns.scatterplot(x=col, y="price", data=df, color="blue") plt.title(f"{col} vs Price", fontweight="black", fontsize=20, pad=10) plt.tight_layout() x += 1 scatter_plot(["carlength", "carwidth", "carheight"]) # # Visualizing "EngineSize", "Boreratio" & "Stroke" Features scatter_plot(["enginesize", "boreratio", "stroke"]) # # Visualizing "Compressionratio", "Horsepower" & "Peakrpm" Features scatter_plot(["compressionratio", "horsepower", "peakrpm"]) # # Visualizing "WheelBase" & "Curbweight" Features def scatter_plot(cols): x = 1 plt.figure(figsize=(15, 6)) for col in cols: plt.subplot(1, 2, x) sns.scatterplot(x=col, y="price", data=df, color="blue") plt.title(f"{col} vs Price", fontweight="black", fontsize=20, pad=10) plt.tight_layout() x += 1 scatter_plot(["wheelbase", "curbweight"]) # # Visualizing "Citympg" & "Highwaympg" Features scatter_plot(["citympg", "highwaympg"]) # # Feature Engineering z = round(df.groupby(["CompanyName"])["price"].agg(["mean"]), 2).T z df = df.merge(z.T, how="left", on="CompanyName") bins = [0, 10000, 20000, 40000] cars_bin = ["Budget", "Medium", "Highend"] df["CarsRange"] = pd.cut(df["mean"], bins, right=False, labels=cars_bin) df.head() # # # Data Preprocessing # new_df = df[ [ "fueltype", "aspiration", "doornumber", "carbody", "drivewheel", "enginetype", "cylindernumber", "fuelsystem", "wheelbase", "carlength", "carwidth", "curbweight", "enginesize", "boreratio", "horsepower", "citympg", "highwaympg", "price", "CarsRange", ] ] new_df.head() new_df = pd.get_dummies( columns=[ "fueltype", "aspiration", "doornumber", "carbody", "drivewheel", "enginetype", "cylindernumber", "fuelsystem", "CarsRange", ], data=new_df, ) new_df.head() # # Feature Scaling of Numerical Data scaler = StandardScaler() num_cols = [ "wheelbase", "carlength", "carwidth", "curbweight", "enginesize", "boreratio", "horsepower", "citympg", "highwaympg", ] new_df[num_cols] = scaler.fit_transform(new_df[num_cols]) new_df.head() # # Selecting Features & Labels for Model Training & Testing x = new_df.drop(columns=["price"]) y = new_df["price"] x.shape y.shape # # Splitting Data for Model Training & Testing x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.33, random_state=42 ) print("x_train -> ", x_train.shape) print("x_test -> ", x_test.shape) print("y_train -> ", y_train.shape) print("y_test -> ", y_test.shape) # # # Model Building training_score = [] testing_score = [] def model_prediction(model): model.fit(x_train, y_train) x_train_pred = model.predict(x_train) x_test_pred = model.predict(x_test) a = r2_score(y_train, x_train_pred) * 100 b = r2_score(y_test, x_test_pred) * 100 training_score.append(a) testing_score.append(b) print(f"r2_Score of {model} model on Training Data is:", a) print(f"r2_Score of {model} model on Testing Data is:", b) # # Linear-Regression Model model_prediction(LinearRegression()) # # Decision-Tree-Regressor Model model_prediction(DecisionTreeRegressor()) # # Random-Forest-Regressor Model model_prediction(RandomForestRegressor()) # # Ada-Boost-Regressor Model model_prediction(AdaBoostRegressor()) # # Gradient-Boosting-Regressor Model model_prediction(GradientBoostingRegressor()) # # LGBM Regressor Model model_prediction(LGBMRegressor()) # # XGBRegressor Model model_prediction(XGBRegressor()) # # Cat-Boost-Regressor Model model_prediction(CatBoostRegressor(verbose=False)) # # # All Model Performance Comparison models = [ "Linear Regression", "Decision Tree", "Random Forest", "Ada Boost", "Gradient Boost", "LGBM", "XGBoost", "CatBoost", ] results = pd.DataFrame( { "Algorithms": models, "Training Score": training_score, "Testing Score": testing_score, } ) results # # Plotting above results using column-bar chart results.plot( x="Algorithms", y=["Training Score", "Testing Score"], figsize=(16, 6), kind="bar", title="Performance Visualization of Different Models", colormap="Set1", ) plt.show()
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/534/129534551.ipynb
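The feature-engineering cell in the script above buckets each company's mean price into a `CarsRange` band with `pd.cut`. A minimal standalone sketch of that binning on made-up numbers (illustrative values only, not from the dataset); note that with `right=False` the intervals are half-open, so a mean of 40,000 or more falls outside the last edge and becomes NaN:

```python
import pandas as pd

# Illustrative company mean prices (not real dataset values).
mean_price = pd.Series([8000, 15000, 32000, 41000],
                       index=["chevrolet", "toyota", "bmw", "jaguar"])

bins = [0, 10000, 20000, 40000]        # [0,10k), [10k,20k), [20k,40k)
labels = ["Budget", "Medium", "Highend"]
cars_range = pd.cut(mean_price, bins, right=False, labels=labels)
print(cars_range)  # jaguar exceeds the last edge and becomes NaN
```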
car-price-prediction
hellbuoy
[{"Id": 129534551, "ScriptId": 37947892, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 7511659, "CreationDate": "05/14/2023 15:52:15", "VersionNumber": 1.0, "Title": "Car Price Prediction", "EvaluationDate": "05/14/2023", "IsChange": true, "TotalLines": 363.0, "LinesInsertedFromPrevious": 56.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 307.0, "LinesInsertedFromFork": 56.0, "LinesDeletedFromFork": 546.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 307.0, "TotalVotes": 0}]
[{"Id": 185692467, "KernelVersionId": 129534551, "SourceDatasetVersionId": 741735}]
[{"Id": 741735, "DatasetId": 383055, "DatasourceVersionId": 762363, "CreatorUserId": 2318606, "LicenseName": "Unknown", "CreationDate": "10/15/2019 16:45:27", "VersionNumber": 1.0, "Title": "Car Price Prediction Multiple Linear Regression", "Slug": "car-price-prediction", "Subtitle": "Predicting the Prices of cars using RFE and VIF", "Description": "### Problem Statement\n\nA Chinese automobile company Geely Auto aspires to enter the US market by setting up their manufacturing unit there and producing cars locally to give competition to their US and European counterparts. \n\n \n\nThey have contracted an automobile consulting company to understand the factors on which the pricing of cars depends. Specifically, they want to understand the factors affecting the pricing of cars in the American market, since those may be very different from the Chinese market. The company wants to know:\n\nWhich variables are significant in predicting the price of a car\nHow well those variables describe the price of a car\nBased on various market surveys, the consulting firm has gathered a large data set of different types of cars across the America market. \n\n\n### Business Goal\n\nWe are required to model the price of cars with the available independent variables. It will be used by the management to understand how exactly the prices vary with the independent variables. They can accordingly manipulate the design of the cars, the business strategy etc. to meet certain price levels. Further, the model will be a good way for management to understand the pricing dynamics of a new market. \n\n### Please Note : The dataset provided is for learning purpose. Please don\u2019t draw any inference with real world scenario.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
[{"Id": 383055, "CreatorUserId": 2318606, "OwnerUserId": 2318606.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 741735.0, "CurrentDatasourceVersionId": 762363.0, "ForumId": 395004, "Type": 2, "CreationDate": "10/15/2019 16:45:27", "LastActivityDate": "10/15/2019", "TotalViews": 339360, "TotalDownloads": 50133, "TotalVotes": 491, "TotalKernels": 345}]
[{"Id": 2318606, "UserName": "hellbuoy", "DisplayName": "Manish Kumar", "RegisterDate": "10/03/2018", "PerformanceTier": 2}]
[{"car-price-prediction/CarPrice_Assignment.csv": {"column_names": "[\"car_ID\", \"symboling\", \"CarName\", \"fueltype\", \"aspiration\", \"doornumber\", \"carbody\", \"drivewheel\", \"enginelocation\", \"wheelbase\", \"carlength\", \"carwidth\", \"carheight\", \"curbweight\", \"enginetype\", \"cylindernumber\", \"enginesize\", \"fuelsystem\", \"boreratio\", \"stroke\", \"compressionratio\", \"horsepower\", \"peakrpm\", \"citympg\", \"highwaympg\", \"price\"]", "column_data_types": "{\"car_ID\": \"int64\", \"symboling\": \"int64\", \"CarName\": \"object\", \"fueltype\": \"object\", \"aspiration\": \"object\", \"doornumber\": \"object\", \"carbody\": \"object\", \"drivewheel\": \"object\", \"enginelocation\": \"object\", \"wheelbase\": \"float64\", \"carlength\": \"float64\", \"carwidth\": \"float64\", \"carheight\": \"float64\", \"curbweight\": \"int64\", \"enginetype\": \"object\", \"cylindernumber\": \"object\", \"enginesize\": \"int64\", \"fuelsystem\": \"object\", \"boreratio\": \"float64\", \"stroke\": \"float64\", \"compressionratio\": \"float64\", \"horsepower\": \"int64\", \"peakrpm\": \"int64\", \"citympg\": \"int64\", \"highwaympg\": \"int64\", \"price\": \"float64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 205 entries, 0 to 204\nData columns (total 26 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 car_ID 205 non-null int64 \n 1 symboling 205 non-null int64 \n 2 CarName 205 non-null object \n 3 fueltype 205 non-null object \n 4 aspiration 205 non-null object \n 5 doornumber 205 non-null object \n 6 carbody 205 non-null object \n 7 drivewheel 205 non-null object \n 8 enginelocation 205 non-null object \n 9 wheelbase 205 non-null float64\n 10 carlength 205 non-null float64\n 11 carwidth 205 non-null float64\n 12 carheight 205 non-null float64\n 13 curbweight 205 non-null int64 \n 14 enginetype 205 non-null object \n 15 cylindernumber 205 non-null object \n 16 enginesize 205 non-null int64 \n 17 fuelsystem 205 non-null object \n 18 boreratio 205 non-null float64\n 19 stroke 205 non-null float64\n 20 compressionratio 205 non-null float64\n 21 horsepower 205 non-null int64 \n 22 peakrpm 205 non-null int64 \n 23 citympg 205 non-null int64 \n 24 highwaympg 205 non-null int64 \n 25 price 205 non-null float64\ndtypes: float64(8), int64(8), object(10)\nmemory usage: 41.8+ KB\n", "summary": "{\"car_ID\": {\"count\": 205.0, \"mean\": 103.0, \"std\": 59.32256456582661, \"min\": 1.0, \"25%\": 52.0, \"50%\": 103.0, \"75%\": 154.0, \"max\": 205.0}, \"symboling\": {\"count\": 205.0, \"mean\": 0.8341463414634146, \"std\": 1.2453068281055297, \"min\": -2.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 2.0, \"max\": 3.0}, \"wheelbase\": {\"count\": 205.0, \"mean\": 98.75658536585367, \"std\": 6.021775685025571, \"min\": 86.6, \"25%\": 94.5, \"50%\": 97.0, \"75%\": 102.4, \"max\": 120.9}, \"carlength\": {\"count\": 205.0, \"mean\": 174.04926829268288, \"std\": 12.33728852655518, \"min\": 141.1, \"25%\": 166.3, \"50%\": 173.2, \"75%\": 183.1, \"max\": 208.1}, \"carwidth\": {\"count\": 205.0, \"mean\": 65.90780487804878, \"std\": 2.145203852687183, \"min\": 60.3, \"25%\": 64.1, \"50%\": 65.5, \"75%\": 66.9, \"max\": 72.3}, \"carheight\": {\"count\": 205.0, \"mean\": 53.72487804878049, \"std\": 2.4435219699049036, \"min\": 47.8, \"25%\": 52.0, \"50%\": 54.1, \"75%\": 55.5, \"max\": 59.8}, \"curbweight\": {\"count\": 205.0, \"mean\": 2555.5658536585365, \"std\": 520.6802035016387, \"min\": 1488.0, \"25%\": 2145.0, \"50%\": 2414.0, \"75%\": 2935.0, \"max\": 
4066.0}, \"enginesize\": {\"count\": 205.0, \"mean\": 126.90731707317073, \"std\": 41.64269343817984, \"min\": 61.0, \"25%\": 97.0, \"50%\": 120.0, \"75%\": 141.0, \"max\": 326.0}, \"boreratio\": {\"count\": 205.0, \"mean\": 3.329756097560975, \"std\": 0.27084370542622926, \"min\": 2.54, \"25%\": 3.15, \"50%\": 3.31, \"75%\": 3.58, \"max\": 3.94}, \"stroke\": {\"count\": 205.0, \"mean\": 3.255414634146341, \"std\": 0.31359701376080407, \"min\": 2.07, \"25%\": 3.11, \"50%\": 3.29, \"75%\": 3.41, \"max\": 4.17}, \"compressionratio\": {\"count\": 205.0, \"mean\": 10.142536585365855, \"std\": 3.972040321863298, \"min\": 7.0, \"25%\": 8.6, \"50%\": 9.0, \"75%\": 9.4, \"max\": 23.0}, \"horsepower\": {\"count\": 205.0, \"mean\": 104.1170731707317, \"std\": 39.54416680936116, \"min\": 48.0, \"25%\": 70.0, \"50%\": 95.0, \"75%\": 116.0, \"max\": 288.0}, \"peakrpm\": {\"count\": 205.0, \"mean\": 5125.121951219512, \"std\": 476.98564305694634, \"min\": 4150.0, \"25%\": 4800.0, \"50%\": 5200.0, \"75%\": 5500.0, \"max\": 6600.0}, \"citympg\": {\"count\": 205.0, \"mean\": 25.21951219512195, \"std\": 6.542141653001622, \"min\": 13.0, \"25%\": 19.0, \"50%\": 24.0, \"75%\": 30.0, \"max\": 49.0}, \"highwaympg\": {\"count\": 205.0, \"mean\": 30.75121951219512, \"std\": 6.886443130941824, \"min\": 16.0, \"25%\": 25.0, \"50%\": 30.0, \"75%\": 34.0, \"max\": 54.0}, \"price\": {\"count\": 205.0, \"mean\": 13276.710570731706, \"std\": 7988.85233174315, \"min\": 5118.0, \"25%\": 7788.0, \"50%\": 10295.0, \"75%\": 16503.0, \"max\": 45400.0}}", "examples": "{\"car_ID\":{\"0\":1,\"1\":2,\"2\":3,\"3\":4},\"symboling\":{\"0\":3,\"1\":3,\"2\":1,\"3\":2},\"CarName\":{\"0\":\"alfa-romero giulia\",\"1\":\"alfa-romero stelvio\",\"2\":\"alfa-romero Quadrifoglio\",\"3\":\"audi 100 ls\"},\"fueltype\":{\"0\":\"gas\",\"1\":\"gas\",\"2\":\"gas\",\"3\":\"gas\"},\"aspiration\":{\"0\":\"std\",\"1\":\"std\",\"2\":\"std\",\"3\":\"std\"},\"doornumber\":{\"0\":\"two\",\"1\":\"two\",\"2\":\"two\",\"3\":\"four\"},\"carbody\":{\"0\":\"convertible\",\"1\":\"convertible\",\"2\":\"hatchback\",\"3\":\"sedan\"},\"drivewheel\":{\"0\":\"rwd\",\"1\":\"rwd\",\"2\":\"rwd\",\"3\":\"fwd\"},\"enginelocation\":{\"0\":\"front\",\"1\":\"front\",\"2\":\"front\",\"3\":\"front\"},\"wheelbase\":{\"0\":88.6,\"1\":88.6,\"2\":94.5,\"3\":99.8},\"carlength\":{\"0\":168.8,\"1\":168.8,\"2\":171.2,\"3\":176.6},\"carwidth\":{\"0\":64.1,\"1\":64.1,\"2\":65.5,\"3\":66.2},\"carheight\":{\"0\":48.8,\"1\":48.8,\"2\":52.4,\"3\":54.3},\"curbweight\":{\"0\":2548,\"1\":2548,\"2\":2823,\"3\":2337},\"enginetype\":{\"0\":\"dohc\",\"1\":\"dohc\",\"2\":\"ohcv\",\"3\":\"ohc\"},\"cylindernumber\":{\"0\":\"four\",\"1\":\"four\",\"2\":\"six\",\"3\":\"four\"},\"enginesize\":{\"0\":130,\"1\":130,\"2\":152,\"3\":109},\"fuelsystem\":{\"0\":\"mpfi\",\"1\":\"mpfi\",\"2\":\"mpfi\",\"3\":\"mpfi\"},\"boreratio\":{\"0\":3.47,\"1\":3.47,\"2\":2.68,\"3\":3.19},\"stroke\":{\"0\":2.68,\"1\":2.68,\"2\":3.47,\"3\":3.4},\"compressionratio\":{\"0\":9.0,\"1\":9.0,\"2\":9.0,\"3\":10.0},\"horsepower\":{\"0\":111,\"1\":111,\"2\":154,\"3\":102},\"peakrpm\":{\"0\":5000,\"1\":5000,\"2\":5000,\"3\":5500},\"citympg\":{\"0\":21,\"1\":21,\"2\":19,\"3\":24},\"highwaympg\":{\"0\":27,\"1\":27,\"2\":26,\"3\":30},\"price\":{\"0\":13495.0,\"1\":16500.0,\"2\":16500.0,\"3\":13950.0}}"}}]
true
1
<start_data_description><data_path>car-price-prediction/CarPrice_Assignment.csv: <column_names> ['car_ID', 'symboling', 'CarName', 'fueltype', 'aspiration', 'doornumber', 'carbody', 'drivewheel', 'enginelocation', 'wheelbase', 'carlength', 'carwidth', 'carheight', 'curbweight', 'enginetype', 'cylindernumber', 'enginesize', 'fuelsystem', 'boreratio', 'stroke', 'compressionratio', 'horsepower', 'peakrpm', 'citympg', 'highwaympg', 'price'] <column_types> {'car_ID': 'int64', 'symboling': 'int64', 'CarName': 'object', 'fueltype': 'object', 'aspiration': 'object', 'doornumber': 'object', 'carbody': 'object', 'drivewheel': 'object', 'enginelocation': 'object', 'wheelbase': 'float64', 'carlength': 'float64', 'carwidth': 'float64', 'carheight': 'float64', 'curbweight': 'int64', 'enginetype': 'object', 'cylindernumber': 'object', 'enginesize': 'int64', 'fuelsystem': 'object', 'boreratio': 'float64', 'stroke': 'float64', 'compressionratio': 'float64', 'horsepower': 'int64', 'peakrpm': 'int64', 'citympg': 'int64', 'highwaympg': 'int64', 'price': 'float64'} <dataframe_Summary> {'car_ID': {'count': 205.0, 'mean': 103.0, 'std': 59.32256456582661, 'min': 1.0, '25%': 52.0, '50%': 103.0, '75%': 154.0, 'max': 205.0}, 'symboling': {'count': 205.0, 'mean': 0.8341463414634146, 'std': 1.2453068281055297, 'min': -2.0, '25%': 0.0, '50%': 1.0, '75%': 2.0, 'max': 3.0}, 'wheelbase': {'count': 205.0, 'mean': 98.75658536585367, 'std': 6.021775685025571, 'min': 86.6, '25%': 94.5, '50%': 97.0, '75%': 102.4, 'max': 120.9}, 'carlength': {'count': 205.0, 'mean': 174.04926829268288, 'std': 12.33728852655518, 'min': 141.1, '25%': 166.3, '50%': 173.2, '75%': 183.1, 'max': 208.1}, 'carwidth': {'count': 205.0, 'mean': 65.90780487804878, 'std': 2.145203852687183, 'min': 60.3, '25%': 64.1, '50%': 65.5, '75%': 66.9, 'max': 72.3}, 'carheight': {'count': 205.0, 'mean': 53.72487804878049, 'std': 2.4435219699049036, 'min': 47.8, '25%': 52.0, '50%': 54.1, '75%': 55.5, 'max': 59.8}, 'curbweight': {'count': 205.0, 'mean': 2555.5658536585365, 'std': 520.6802035016387, 'min': 1488.0, '25%': 2145.0, '50%': 2414.0, '75%': 2935.0, 'max': 4066.0}, 'enginesize': {'count': 205.0, 'mean': 126.90731707317073, 'std': 41.64269343817984, 'min': 61.0, '25%': 97.0, '50%': 120.0, '75%': 141.0, 'max': 326.0}, 'boreratio': {'count': 205.0, 'mean': 3.329756097560975, 'std': 0.27084370542622926, 'min': 2.54, '25%': 3.15, '50%': 3.31, '75%': 3.58, 'max': 3.94}, 'stroke': {'count': 205.0, 'mean': 3.255414634146341, 'std': 0.31359701376080407, 'min': 2.07, '25%': 3.11, '50%': 3.29, '75%': 3.41, 'max': 4.17}, 'compressionratio': {'count': 205.0, 'mean': 10.142536585365855, 'std': 3.972040321863298, 'min': 7.0, '25%': 8.6, '50%': 9.0, '75%': 9.4, 'max': 23.0}, 'horsepower': {'count': 205.0, 'mean': 104.1170731707317, 'std': 39.54416680936116, 'min': 48.0, '25%': 70.0, '50%': 95.0, '75%': 116.0, 'max': 288.0}, 'peakrpm': {'count': 205.0, 'mean': 5125.121951219512, 'std': 476.98564305694634, 'min': 4150.0, '25%': 4800.0, '50%': 5200.0, '75%': 5500.0, 'max': 6600.0}, 'citympg': {'count': 205.0, 'mean': 25.21951219512195, 'std': 6.542141653001622, 'min': 13.0, '25%': 19.0, '50%': 24.0, '75%': 30.0, 'max': 49.0}, 'highwaympg': {'count': 205.0, 'mean': 30.75121951219512, 'std': 6.886443130941824, 'min': 16.0, '25%': 25.0, '50%': 30.0, '75%': 34.0, 'max': 54.0}, 'price': {'count': 205.0, 'mean': 13276.710570731706, 'std': 7988.85233174315, 'min': 5118.0, '25%': 7788.0, '50%': 10295.0, '75%': 16503.0, 'max': 45400.0}} <dataframe_info> RangeIndex: 205 entries, 0 to 204 
Data columns (total 26 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 car_ID 205 non-null int64 1 symboling 205 non-null int64 2 CarName 205 non-null object 3 fueltype 205 non-null object 4 aspiration 205 non-null object 5 doornumber 205 non-null object 6 carbody 205 non-null object 7 drivewheel 205 non-null object 8 enginelocation 205 non-null object 9 wheelbase 205 non-null float64 10 carlength 205 non-null float64 11 carwidth 205 non-null float64 12 carheight 205 non-null float64 13 curbweight 205 non-null int64 14 enginetype 205 non-null object 15 cylindernumber 205 non-null object 16 enginesize 205 non-null int64 17 fuelsystem 205 non-null object 18 boreratio 205 non-null float64 19 stroke 205 non-null float64 20 compressionratio 205 non-null float64 21 horsepower 205 non-null int64 22 peakrpm 205 non-null int64 23 citympg 205 non-null int64 24 highwaympg 205 non-null int64 25 price 205 non-null float64 dtypes: float64(8), int64(8), object(10) memory usage: 41.8+ KB <some_examples> {'car_ID': {'0': 1, '1': 2, '2': 3, '3': 4}, 'symboling': {'0': 3, '1': 3, '2': 1, '3': 2}, 'CarName': {'0': 'alfa-romero giulia', '1': 'alfa-romero stelvio', '2': 'alfa-romero Quadrifoglio', '3': 'audi 100 ls'}, 'fueltype': {'0': 'gas', '1': 'gas', '2': 'gas', '3': 'gas'}, 'aspiration': {'0': 'std', '1': 'std', '2': 'std', '3': 'std'}, 'doornumber': {'0': 'two', '1': 'two', '2': 'two', '3': 'four'}, 'carbody': {'0': 'convertible', '1': 'convertible', '2': 'hatchback', '3': 'sedan'}, 'drivewheel': {'0': 'rwd', '1': 'rwd', '2': 'rwd', '3': 'fwd'}, 'enginelocation': {'0': 'front', '1': 'front', '2': 'front', '3': 'front'}, 'wheelbase': {'0': 88.6, '1': 88.6, '2': 94.5, '3': 99.8}, 'carlength': {'0': 168.8, '1': 168.8, '2': 171.2, '3': 176.6}, 'carwidth': {'0': 64.1, '1': 64.1, '2': 65.5, '3': 66.2}, 'carheight': {'0': 48.8, '1': 48.8, '2': 52.4, '3': 54.3}, 'curbweight': {'0': 2548, '1': 2548, '2': 2823, '3': 2337}, 'enginetype': {'0': 'dohc', '1': 'dohc', '2': 'ohcv', '3': 'ohc'}, 'cylindernumber': {'0': 'four', '1': 'four', '2': 'six', '3': 'four'}, 'enginesize': {'0': 130, '1': 130, '2': 152, '3': 109}, 'fuelsystem': {'0': 'mpfi', '1': 'mpfi', '2': 'mpfi', '3': 'mpfi'}, 'boreratio': {'0': 3.47, '1': 3.47, '2': 2.68, '3': 3.19}, 'stroke': {'0': 2.68, '1': 2.68, '2': 3.47, '3': 3.4}, 'compressionratio': {'0': 9.0, '1': 9.0, '2': 9.0, '3': 10.0}, 'horsepower': {'0': 111, '1': 111, '2': 154, '3': 102}, 'peakrpm': {'0': 5000, '1': 5000, '2': 5000, '3': 5500}, 'citympg': {'0': 21, '1': 21, '2': 19, '3': 24}, 'highwaympg': {'0': 27, '1': 27, '2': 26, '3': 30}, 'price': {'0': 13495.0, '1': 16500.0, '2': 16500.0, '3': 13950.0}} <end_description>
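The df_info and data-description blocks above can evidently be regenerated from the CSV itself; a minimal sketch of producing the same pieces with pandas, assuming the relative path given in the description:

```python
import io
import pandas as pd

df = pd.read_csv("car-price-prediction/CarPrice_Assignment.csv")

column_names = list(df.columns)                 # <column_names>
column_types = df.dtypes.astype(str).to_dict()  # <column_types>

buf = io.StringIO()
df.info(buf=buf)                                # <dataframe_info>
info_text = buf.getvalue()

summary = df.describe().to_dict()               # <dataframe_Summary>
examples = df.head(4).to_dict()                 # <some_examples>
```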
3,115
0
4,822
3,115
129207541
import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotlib.pyplot as plt import plotly.express as px import plotly.graph_objects as go import seaborn as sns # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # # 🚀SPACESHIP TITANIC - 🧠 AI Project # ## BY: # - Minal Alarm # - Rayyan Ahmed # - Shaikh Abdul Rafay # # # # EDA on the data # - we'll be looking over the train data # - Understand the data # - Detect anomalies # - the target variable is Transported test = pd.read_csv(r"/kaggle/input/spaceship-titanic/test.csv") train = pd.read_csv(r"/kaggle/input/spaceship-titanic/train.csv") train.head() print("Train Shape: ", train.shape) print("Number of values in Train: ", train.count().sum()) print("Total Number of missing values in Train: ", train.isna().sum().sum()) train.info() train.isna().sum() train.nunique() FEATURES = [col for col in train.columns if col != "Transported"] df = pd.concat([train[FEATURES], test[FEATURES]], axis=0) text_features = ["Cabin", "Name"] cat_features = [ col for col in FEATURES if df[col].nunique() < 25 and col not in text_features ] cont_features = [ col for col in FEATURES if df[col].nunique() >= 25 and col not in text_features ] del df print("Total number of features:", len(FEATURES)) print("Number of categorical features:", len(cat_features)) print("Number of continuous features:", len(cont_features)) print("Number of text features:", len(text_features)) print(f"\033[1m\033[91m\t\t\t\tValue Counts") for i in cat_features: print("\n--------------------------------") print(f"\t\033[1m{i}\033[0m") print(f"\033[91m--------------------------------") print(train[i].value_counts()) print("--------------------------------") print(f"\t\033[1m\033[91m\t\t\t\tValue Counts") for i in cont_features: print("\n--------------------------------") print(f"\t\033[1m{i}\033[0m") print(f"\033[91m--------------------------------") pt = train.pivot_table(values=i, index="Transported", aggfunc="mean") print(pt) print("--------------------------------") print(f"\t\033[1m\033[91m\t\t\tValue Counts against Transported") for i in cat_features: print("\n--------------------------------") print(f"\t\033[1m{i}\033[0m") print(f"\033[91m--------------------------------") pt = train.pivot_table(values="Transported", index=i, aggfunc="count") print(pt) print("--------------------------------") print("\n") train.iloc[:, :-1].describe().T.sort_values( by="std", ascending=False ).style.background_gradient(cmap="GnBu").bar(subset=["max"], color="#BB0000").bar( subset=[ "mean", ], color="green", ) train.describe(include="object") # ## Observations # - 14 columns # - 12 features (plus the target and the passenger id) # - 2,324 null values spread across the 12 features # - The average age of passengers on board is 29; the oldest passenger is 79 years old # - most of the passengers are from planet Earth # - The number of transported passengers is greater than the number not transported # - a lot of categorical data # - null values # - object data types will need intensive preprocessing # - high skewness, so the data will need cleaning # # Visualisation # - Present data clearly # - Identify patterns # - Support decision-making # - merging test and train for better understanding # ### we'll use: # - Matplotlib # - Seaborn # - Plotly.express # - plotly.graph_objects # Correlation Matrix fig = px.imshow(train.corr(), text_auto=True) fig.show() # Distribution of Age train_age = train.copy() test_age = test.copy() # adding a column to identify which data set each row is from train_age["type"] = "Train" test_age["type"] = "Test" # merging data sets ageDf = pd.concat([train_age, test_age]) fig = px.histogram( data_frame=ageDf, x="Age", color="type", color_discrete_sequence=["#aa2494", "#1c078e"], marginal="box", nbins=100, template="plotly_white", ) fig.update_layout(title="Distribution of Age", title_x=0.5) fig.show() if len(cat_features) == 0: print("No Categorical features") else: ncols = 2 nrows = 2 fig, axes = plt.subplots(nrows, ncols, figsize=(18, 10)) for r in range(nrows): for c in range(ncols): col = cat_features[r * ncols + c] sns.countplot( x=col, data=train, ax=axes[r, c], palette="viridis", label="Train data" ) sns.countplot( x=col, data=test, ax=axes[r, c], palette="magma", label="Test data" ) axes[r, c].legend() axes[r, c].set_ylabel("") axes[r, c].set_xlabel(col, fontsize=20) axes[r, c].tick_params(labelsize=10, width=0.5) axes[r, c].xaxis.offsetText.set_fontsize(4) axes[r, c].yaxis.offsetText.set_fontsize(4) plt.show() pt = train.pivot_table(values="Transported", index="HomePlanet", aggfunc="count") # px.bar(pt, x = pt.index , y= 'HomePlanet' , color = 'Transported')
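The notebook types each feature as categorical or continuous from a single `nunique` threshold of 25. A minimal sketch of the same idea on a synthetic frame (column names borrowed from Spaceship Titanic, data made up):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "HomePlanet": rng.choice(["Earth", "Mars", "Europa"], size=100),  # few uniques
    "Age": rng.uniform(0, 80, size=100).round(1),                     # many uniques
    "Name": [f"passenger_{i}" for i in range(100)],                   # free text
})

text_features = ["Name"]          # excluded from the typing rule up front
threshold = 25
cat_features = [c for c in toy.columns
                if toy[c].nunique() < threshold and c not in text_features]
cont_features = [c for c in toy.columns
                 if toy[c].nunique() >= threshold and c not in text_features]
print(cat_features, cont_features)  # ['HomePlanet'] ['Age']
```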
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/207/129207541.ipynb
null
null
[{"Id": 129207541, "ScriptId": 38396176, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 12746105, "CreationDate": "05/11/2023 20:38:05", "VersionNumber": 1.0, "Title": "Ai project", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 175.0, "LinesInsertedFromPrevious": 175.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
false
0
1,743
0
1,743
1,743
129207986
# # Analysis of the relationship between students and projects affiliated with the Meninas Digitais program import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # ## Importing Data and Functions dados_alunos = pd.read_csv( "/kaggle/input/dados-inep-2019-teste/SUP_ALUNO_2009_2019_COMPLETO_TIC.csv", sep="|", encoding="ISO-8859-1", ) dados_alunos.head() dados_meninas = pd.read_csv( "/kaggle/input/anlise-de-impacto-do-programa-meninas-digitais/Copy of Projetos Meninas Digitais - Tabela de AGOSTO (01_08_2022).csv" ) dados_meninas.head() # Keeping only the records of valid projects dados_meninas = dados_meninas.loc[dados_meninas.Válido == True] dados_meninas.Válido.unique() dados_meninas["Código IES"].isnull().mean() # keeping only the projects with an associated university dados_meninas = dados_meninas.loc[ (dados_meninas["Código IES"] != "-") & (~dados_meninas["Código IES"].isna()) ] dados_meninas["Código IES"].unique() dados_meninas["Ano"] = pd.to_numeric(dados_meninas.Ano) dados_meninas["Código IES"] = dados_meninas["Código IES"].astype("int") dados_meninas.info() # Keeping only the projects created up to 2019 dados_meninas = dados_meninas.loc[dados_meninas.Ano <= 2019] dados_meninas.groupby(["Ano"])["Ano"].count().plot(kind="bar") dados_alunos.groupby(["TP_MODALIDADE_ENSINO"])["TP_MODALIDADE_ENSINO"].count().plot( kind="bar" ) # Keeping only students enrolled in in-person programs dados_alunos = dados_alunos.loc[dados_alunos.TP_MODALIDADE_ENSINO == 1] dados_alunos.groupby(["TP_MODALIDADE_ENSINO"])["TP_MODALIDADE_ENSINO"].count().plot( kind="bar" ) dados_alunos.groupby(["TP_CATEGORIA_ADMINISTRATIVA"])[ "TP_CATEGORIA_ADMINISTRATIVA" ].count().plot(kind="bar") # Keeping only students from state, federal, and municipal public universities dados_alunos = dados_alunos.loc[ (dados_alunos.TP_CATEGORIA_ADMINISTRATIVA == 1) | (dados_alunos.TP_CATEGORIA_ADMINISTRATIVA == 2) | (dados_alunos.TP_CATEGORIA_ADMINISTRATIVA == 3) ] dados_alunos.groupby(["TP_CATEGORIA_ADMINISTRATIVA"])[ "TP_CATEGORIA_ADMINISTRATIVA" ].count().plot(kind="bar") dados_meninas["Código IES"].sort_values().unique() df = pd.merge(dados_alunos, dados_meninas, left_on="CO_IES", right_on="Código IES") df["Código IES"].sort_values().unique() # The universities missing from the merged base are private institutions dados_meninas.loc[dados_meninas["Código IES"].isin([344, 423, 532, 3543])] # saving the resulting dataset df.head() df.CO_IES.sort_values().unique() df.to_csv("alunos_projeto.csv", index=False)
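The inner merge above silently drops projects whose `Código IES` has no match among the filtered students; a small sketch of auditing that with merge's `indicator` flag (made-up codes, same column names):

```python
import pandas as pd

alunos = pd.DataFrame({"CO_IES": [1, 1, 2]})         # student rows (toy data)
meninas = pd.DataFrame({"Código IES": [1, 2, 344]})  # project rows (toy data)

check = meninas.merge(alunos.drop_duplicates("CO_IES"),
                      how="left", left_on="Código IES",
                      right_on="CO_IES", indicator=True)
unmatched = check.loc[check["_merge"] == "left_only", "Código IES"]
print(unmatched.tolist())  # [344] -> projects with no student rows (e.g. private IES)
```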
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/207/129207986.ipynb
null
null
[{"Id": 129207986, "ScriptId": 29623480, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 8521481, "CreationDate": "05/11/2023 20:45:27", "VersionNumber": 1.0, "Title": "Juntando base de alunos com projetos", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 91.0, "LinesInsertedFromPrevious": 91.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
false
0
1,179
0
1,179
1,179
129207985
import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.preprocessing import LabelEncoder from sklearn import metrics traindata = pd.read_csv("/kaggle/input/traincsv/train (1) (1).csv") traindata traindata.info() x = traindata.drop("stroke", axis=1) y = traindata["stroke"] from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3) CLF = KNeighborsClassifier(n_neighbors=5, weights="distance", algorithm="kd_tree") columns_to_encode = [ "gender", "ever_married", "work_type", "Residence_type", "smoking_status", ] x_train_encoded = pd.get_dummies(x_train, columns=columns_to_encode) x_train_encoded.dropna(inplace=True) # keep the labels aligned with the rows that survived dropna y_train = y_train.loc[x_train_encoded.index] CLF.fit(x_train_encoded, y_train)
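# A hedged evaluation sketch (an addition; the original cell stops at fitting). The test split has to
# be encoded with the same dummy columns as the training data, so we reindex against
# x_train_encoded's columns and fill any category missing from the test split with 0 before scoring.
x_test_encoded = pd.get_dummies(x_test, columns=columns_to_encode)
x_test_encoded = x_test_encoded.reindex(columns=x_train_encoded.columns, fill_value=0)
# simple assumption: impute remaining NaNs with training means instead of dropping rows,
# so x_test_encoded stays aligned with y_test
x_test_encoded = x_test_encoded.fillna(x_train_encoded.mean())
y_pred = CLF.predict(x_test_encoded)
print("Test accuracy: %.3f" % metrics.accuracy_score(y_test, y_pred))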
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/207/129207985.ipynb
null
null
[{"Id": 129207985, "ScriptId": 38411859, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11779896, "CreationDate": "05/11/2023 20:45:26", "VersionNumber": 1.0, "Title": "notebook76e9158390", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 34.0, "LinesInsertedFromPrevious": 34.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
false
0
294
0
294
294
129207038
<jupyter_start><jupyter_text>PNDM Prediction Dataset Unravel the mysteries of Permanent Neonatal Diabetes Mellitus (PNDM) and help doctors diagnose this rare but life-threatening condition earlier with our simulated PNDM prediction dataset. Inspired by real-world medical data and cutting-edge research, this comprehensive dataset includes six features that could help predict PNDM: age at diagnosis, HbA1c levels, genetic information, family history, clinical features, and laboratory data. But beware! Preprocessing the data presents many challenges, including handling missing values, outliers, class imbalance, and scaling and normalization issues. To tackle these challenges, we recommend using the latest data science tools and techniques, including feature selection, imputation, outlier detection, and scaling and normalization methods. Help advance medical research and save lives by exploring the complex world of PNDM with our challenging and exciting dataset. Kaggle dataset identifier: pndm-prediction-dataset <jupyter_script>import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # # PNDM Prediction # This notebook looks into using various Python-based machine learning and data science libraries in an attempt to build a machine learning model capable of predicting whether or not someone has Permanent Neonatal Diabetes Mellitus (PNDM). # We're going to take the following approach: # 1. Problem definition # 2. Data # 3. Evaluation # 4. Modelling # ## Problem Definition # Predicting whether a patient has PNDM or not by using the given clinical parameters is a binary classification problem. # ## Data # The sample dataset is taken from [Kaggle](https://www.kaggle.com/datasets/slmsshk/pndm-prediction-dataset). The dataset has 7 features to be used in predicting the label, which is the PNDM column in the dataset. You can see the data dictionary [here](./Data/README.md). # ## Evaluation # > If we can reach 95% accuracy at predicting whether or not a patient has PNDM during the proof of concept, we'll pursue the project. # ⚠️ **Note:** Due to the nature of experimentation, the evaluation metric may change over time. # ## Import Libraries # We're going to use: # - [pandas](https://pandas.pydata.org/) for data analysis. # - [NumPy](https://numpy.org/) for numerical operations. # - [Matplotlib](https://matplotlib.org/) / [seaborn](https://seaborn.pydata.org/) for plotting or data visualization. # - [Scikit-Learn](https://scikit-learn.org/stable/) for machine learning modelling and evaluation.
# Regular EDA (exploratory data analysis) and plotting libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sb sb.set_style("ticks") # we want our plots to appear inside the notebook from imblearn.under_sampling import RandomUnderSampler from sklearn.preprocessing import OneHotEncoder # Linear Models from Scikit-Learn from sklearn.linear_model import LogisticRegression from sklearn.discriminant_analysis import LinearDiscriminantAnalysis # Non-linear Models from Scikit-Learn from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB # Ensemble Models from Scikit-Learn from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import AdaBoostClassifier from sklearn.ensemble import GradientBoostingClassifier # Model Evaluations from sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold from sklearn.model_selection import RandomizedSearchCV, GridSearchCV from sklearn.metrics import confusion_matrix, classification_report from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score from sklearn.metrics import auc, roc_auc_score, roc_curve # ## Load Data df = pd.read_csv("/kaggle/input/pndm-prediction-dataset/PNDB.csv") df.shape # (rows, columns) # ## Data Exploration (Exploratory Data Analysis - EDA) # The goal here is to find out more about the data and become a subject matter expert on the dataset you're working with. # 1. What question(s) are you trying to solve? # 2. What kind of data do we have and how do we treat different types? # 3. What's missing from the data and how do you deal with it? # 4. Where are the outliers and why should you care about them? # 5. How can you add, change or remove features to get more out of your data? df.head() # Let's check if the dataset is balanced or not df["PNDM"].value_counts() df["PNDM"].value_counts().plot(kind="bar", color=["salmon", "lightblue"]) # ⚠️ It can easily be observed that the number of datapoints labeled as Not PNDM is much higher than the number labeled as PNDM, which means that the sample dataset we have is highly imbalanced. # > When we try to use a usual classifier on an imbalanced dataset, the model favors the majority class due to its larger volume. # That's why we need to apply a proper technique to handle this situation. df.info() df.isna().sum() # ✍🏼 As you can see in the above table, there are no null values in any of the columns. df.drop_duplicates().info() # ✍🏼 The dataset does not contain any duplicated data points. df.describe(include="all") # ### PNDM Frequency according to Genetic Info sb.countplot(data=df, x="PNDM", hue="Genetic Info") plt.title("PNDM Frequency for Genetic Info", fontsize=10) plt.ylabel("Amount") plt.xlabel("0 = No PNDM, 1 = PNDM") plt.show() # ✍🏼 It seems that there is a strong positive correlation between having PNDM and carrying a genetic mutation. However, even though a person has a genetic mutation, it does not necessarily mean that PNDM will occur. # ### PNDM Frequency according to Family History sb.countplot(data=df, x="PNDM", hue="Family History") plt.title("PNDM Frequency for Family History", fontsize=10) plt.ylabel("Amount") plt.xlabel("0 = No PNDM, 1 = PNDM") plt.show() # ✍🏼 As per the above graph, it's hard to tell whether PNDM is related to family history **according to the dataset we have**.
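# A small follow-up sketch (an addition, using the df already loaded): normalized crosstabs put a
# number on what the two countplots above only hint at, by showing the PNDM rate within each category.
for col in ["Genetic Info", "Family History"]:
    print(pd.crosstab(df[col], df["PNDM"], normalize="index"), "\n")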
# ### PNDM Frequency according to Developmental Delay # Developmental delay refers to a condition in which a child does not reach developmental milestones at the expected age. It signifies a significant lag or delay in the acquisition of skills and abilities in areas such as physical, cognitive, communication, social, and emotional development. Developmental delay can be associated with PNDM in some cases. sb.countplot(data=df, x="PNDM", hue="Developmental Delay") plt.title("PNDM Frequency for Developmental Delay", fontsize=10) plt.ylabel("Amount") plt.xlabel("0 = No PNDM, 1 = PNDM") plt.show() # ✍🏼 Most of the cases in the sample dataset don't have developmental delay. From the above graph, it's hard to associate the cases that have developmental delay with the cases that have PNDM. # ### Distribution of the Age of the Patients sb.histplot(data=df, x="Age", kde=True, hue="PNDM") plt.show() # ✍🏼 Ages are between 1 and 11. It seems that we don't have any patient older than 4 years with PNDM in the sample dataset. It's better to treat age as a categorical feature rather than a numeric feature. # ### Distribution of the HbA1c Levels f, (ax_box, ax_hist) = plt.subplots( 2, sharex=False, gridspec_kw={"height_ratios": (0.15, 0.85)} ) sb.boxplot(data=df, ax=ax_box, x="HbA1c", color="lightblue") sb.histplot(data=df, x="HbA1c", hue="PNDM", kde=True, ax=ax_hist) ax_box.set(xlabel="") plt.show() # ✍🏼 As you can see in the image above, the HbA1c level follows a normal distribution. Also, as per the box plot, we should consider all HbA1c values which are less than 4 or more than 10 as outliers. # ### Distribution of the Birth Weight f, (ax_box, ax_hist) = plt.subplots( 2, sharex=False, gridspec_kw={"height_ratios": (0.15, 0.85)} ) sb.boxplot(data=df, ax=ax_box, x="Birth Weight", color="lightblue") sb.histplot(data=df, x="Birth Weight", hue="PNDM", kde=True, ax=ax_hist) ax_box.set(xlabel="") plt.show() # ✍🏼 It can be clearly seen that for the datapoints without PNDM the Birth Weight feature follows a normal distribution. On the other hand, for the datapoints with PNDM the graph is left skewed. We might also say that, as per the collected data, if a person's birth weight is more than 3 kg, it's highly unlikely for that person to have PNDM. # Additionally, all birth weights below 1.5 kg or above 4 kg should be considered outliers. f, (ax_box, ax_hist) = plt.subplots( 2, sharex=False, gridspec_kw={"height_ratios": (0.15, 0.85)} ) sb.boxplot(data=df, x="Insulin Level", ax=ax_box, color="lightblue") sb.histplot(data=df, x="Insulin Level", hue="PNDM", kde=True, ax=ax_hist) ax_box.set(xlabel="") plt.show() # ✍🏼 The insulin level also follows a normal distribution. We should consider insulin level values which are less than 0 or more than 10.2 as outliers. # ### Correlation between independent variables corr_matrix = df.corr(numeric_only=True) plt.figure(figsize=(10, 8)) sb.heatmap( corr_matrix, annot=True, linewidths=0.5, fmt=".2f", cmap="YlGnBu", annot_kws={"fontsize": 16}, ) # ✍🏼 The matrix above indicates that none of the numerical features are correlated with each other. # ## Data Preprocessing # As indicated above, there are some outlier values in some of the features that need to be removed from the dataset. Additionally, we'll balance the dataset to have more reliable results. # >Before applying random under-sampling to balance the dataset, it's generally recommended to handle outliers. Outliers can affect the distribution of the data and the decision boundaries of the classification model, potentially leading to biased results.
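# A quick cross-check sketch (an addition): the cutoffs above were read off the box plots by eye; the
# standard 1.5 * IQR whisker rule can be computed directly for the same numeric columns.
for col in ["HbA1c", "Birth Weight", "Insulin Level"]:
    q1, q3 = df[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    print(f"{col}: lower whisker = {q1 - 1.5 * iqr:.2f}, upper whisker = {q3 + 1.5 * iqr:.2f}")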
# ❗️ However, before applying any data preprocessing we need to split the dataset into train and test so it won't cause any data leakage. # ### Encoding Categorical Values # We'll encode the categorical features mentioned below: # - Genetic Info # - Family History # - Developmental Delay cat_feature = ["Genetic Info", "Family History", "Developmental Delay"] dummy_df = pd.get_dummies(df[cat_feature]) df = pd.concat([df, dummy_df], axis=1) df = df.drop(cat_feature, axis=1) df.shape df.head() # ### Splitting the Dataset x = df.drop("PNDM", axis=1) y = df["PNDM"] x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.2, random_state=101 ) # ### Removing Outliers # As highlighted in the EDA section, all numerical features have outliers. The values of each feature should lie within the ranges listed in the table below. # |Min Value|Feature|Max Value| # |---------|-------|---------| # |4| HbA1c |10| # |1.5|Birth Weight|4| # |0|Insulin Level|10.2| df_clean = ( df[ (df["HbA1c"] >= 4) & (df["HbA1c"] <= 10) & (df["Birth Weight"] >= 1.5) & (df["Birth Weight"] <= 4) & (df["Insulin Level"] >= 0) & (df["Insulin Level"] <= 10.2) ] .reset_index() .drop("index", axis=1) ) df_clean.shape # ✍🏼 The number of rows in the initial dataset is 100,000. The number of rows after removing outliers is 97,410. # Now let's re-check the class distribution. df_clean["PNDM"].value_counts() df_clean["PNDM"].value_counts().plot(kind="bar", color=["salmon", "lightblue"]) plt.show() # ✍🏼 It shows that the ratio of datapoints with PNDM to datapoints without PNDM is almost the same after removing the outliers. Now we can apply the Random Under Sampler to get a balanced dataset. df_clean.shape # ### Balancing the Dataset # We'll use `RandomUnderSampler` from the `imblearn` library. We'll set the `random_state` parameter to `101` for reproducibility. Also, `sampling_strategy` will be set to `majority`, which undersamples the majority class, i.e. the class with the largest number of examples. undersample = RandomUnderSampler(sampling_strategy="majority", random_state=101) x_under, y_under = undersample.fit_resample(x_train, y_train) print(x_under.shape) y_under.value_counts().plot(kind="bar", color=["salmon", "lightblue"]) plt.show() # ✍🏼 As you can see, in the training dataset we now have the same number of datapoints labeled 0 and 1. # ## Modeling # We've explored the data, now we'll try to use machine learning to predict our target variable based on the 7 independent variables. We'll create a function which evaluates the performance of several algorithms so we can call it repeatedly with different inputs, such as after rescaling the features or eliminating features by their importance. We'll evaluate: # Linear Algorithms # 1. Logistic Regression (LR) # 2. Linear Discriminant Analysis (LDA) # Non-linear Algorithms # 3. Decision Tree Classifier (DT) # 4. $k$-Neighbors Classifier (KNN) # 5. Support Vector Classifier (SVC) # 6. Gaussian Naive Bayes (GNB) # Ensemble Algorithms # 7. Random Forest Classifier (RFC) # 8. AdaBoost Classifier (ABC) # 9. Gradient Boosting Classifier (GBC) # Based on the result we'll decide on the model, then we'll fine-tune the hyperparameters. A leakage-safe cross-validation sketch is shown right after this plan.
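# A leakage-safe alternative sketch (an addition; assumes imblearn's pipeline module is available in
# this environment): putting the sampler and the estimator in an imblearn pipeline means the
# under-sampling is re-fit on each CV training fold only, while every validation fold stays untouched.
from imblearn.pipeline import make_pipeline as make_imb_pipeline

pipe = make_imb_pipeline(
    RandomUnderSampler(sampling_strategy="majority", random_state=101),
    RandomForestClassifier(random_state=101),
)
cv = StratifiedKFold(n_splits=5, random_state=101, shuffle=True)
scores = cross_val_score(pipe, x_train, y_train, cv=cv, scoring="accuracy")
print("CV accuracy: %.2f%% (+/- %.2f)" % (scores.mean() * 100, scores.std() * 100))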
# def evaluate_algorithms(x, y): models = [] models.append(("LR", LogisticRegression(solver="liblinear", multi_class="ovr"))) models.append(("LDA", LinearDiscriminantAnalysis())) models.append(("DT", DecisionTreeClassifier())) models.append(("KNN", KNeighborsClassifier())) models.append(("SVC", SVC(gamma="auto"))) models.append(("GNB", GaussianNB())) models.append(("RFC", RandomForestClassifier())) models.append(("ABC", AdaBoostClassifier())) models.append(("GBC", GradientBoostingClassifier())) names = [] results = [] for name, model in models: kfold = StratifiedKFold(n_splits=10, random_state=101, shuffle=True) cv_results = cross_val_score(model, x, y, cv=kfold, scoring="accuracy") results.append(cv_results) names.append(name) msg = "%s - Mean ACC: %.2f%% STD(%.2f)" % ( name, cv_results.mean() * 100, cv_results.std(), ) print(msg) # Plot the results fig = plt.figure(figsize=(8, 8)) fig.suptitle("Algorithm Comparison", fontsize=16, y=0.93) ax = fig.add_subplot(111) plt.boxplot(results) ax.set_xticklabels(names, fontsize=14) plt.show() evaluate_algorithms(x_under, y_under) # ✍🏼 RFC provides the best training accuracy among the candidates. It's not a surprise when you consider the estimator-selection process defined by scikit-learn below. # [Choosing the right estimator](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html) # A 99.95% accuracy score is above our expectations. That's why we'll skip steps such as scaling, feature selection, and hyperparameter tuning and jump straight to testing. It's time to see the performance of the RFC on the test data. # ## Evaluating the Model # We'll evaluate the model using the below metrics in addition to the accuracy score: # - ROC curve and AUC score # - Confusion matrix # - Classification report # Fit the model using under-sampled training data model = RandomForestClassifier() model.fit(x_under, y_under) # Evaluate the model y_hat = model.predict(x_test) # Print the accuracy score acc = accuracy_score(y_test, y_hat) * 100 print("Accuracy: {:.2f}%".format(acc)) # ✍🏼 The difference between the training and the testing accuracy is negligible, so we are safe to say that the model does not suffer from overfitting or underfitting. # Let's check the other metrics. # ### Classification Report # A classification report will also give us information on the precision and recall of our model for each class. # * **Precision** - Indicates the proportion of positive identifications (model predicted class 1) which were actually correct. A model which produces no false positives has a precision of 1.0. # * **Recall** - Indicates the proportion of actual positives which were correctly classified. A model which produces no false negatives has a recall of 1.0. # * **F1 score** - A combination of precision and recall. A perfect model achieves an F1 score of 1.0. # * **Support** - The number of samples each metric was calculated on. # * **Accuracy** - The accuracy of the model in decimal form. Perfect accuracy is equal to 1.0. # * **Macro avg** - Short for macro average, the average precision, recall and F1 score between classes. Macro avg doesn't take class imbalance into account, so if you do have class imbalances, pay attention to this metric. # * **Weighted avg** - Short for weighted average, the weighted average precision, recall and F1 score between classes. Weighted means each metric is calculated with respect to how many samples there are in each class. This metric will favour the majority class (e.g. it will give a high value when one class outperforms another due to having more samples).
# # Print classification report print(classification_report(y_test, y_hat)) # ### ROC Curve and AUC Scores # **ROC Curve:** The ROC curve is a graphical representation of the performance of a binary classification model. It plots the true positive rate (sensitivity or recall) against the false positive rate (1 - specificity) for different classification thresholds. The ROC curve provides insights into the trade-off between sensitivity and specificity and helps evaluate the model's performance across various threshold settings. # **AUC:** AUC refers to the Area Under the ROC Curve. It is a metric that quantifies the overall performance of a binary classification model. The AUC score ranges from 0 to 1, where a value of 1 indicates a perfect classifier, and a value of 0.5 represents a random classifier. The higher the AUC score, the better the model's ability to distinguish between positive and negative instances. AUC provides a single value to compare and rank different models or evaluate the performance of a single model. # In summary, the ROC curve illustrates the performance of a binary classifier by plotting the true positive rate against the false positive rate. The AUC score summarizes the overall performance of the classifier, representing the area under the ROC curve. # Import ROC curve function from metrics module from sklearn.metrics import RocCurveDisplay fpr, tpr, thresholds = roc_curve(y_test, y_hat) roc_auc = auc(fpr, tpr) display = RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=roc_auc, estimator_name="RFC") display.plot() plt.show() # ### Confusion Matrix # A confusion matrix is a table that summarizes the performance of a classification model by showing the counts of true positive, true negative, false positive, and false negative predictions. It provides insights into the model's accuracy, precision, recall, and other evaluation metrics based on the comparison between predicted and actual class labels. # Display the confusion matrix cm = confusion_matrix(y_test, y_hat) cm_df = pd.DataFrame(cm) plt.figure(figsize=(5, 4)) sb.heatmap(cm_df, annot=True, fmt="d", cmap="YlGnBu") plt.xlabel("Predictions", fontsize=12) plt.ylabel("Actual Values", fontsize=12) plt.show()
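# A closing sketch (an addition, not in the original notebook; it assumes fit_resample returned a
# pandas DataFrame so x_under has a .columns attribute): the fitted random forest exposes per-feature
# importances, which hints at which inputs drive the near-perfect separation.
importances = pd.Series(model.feature_importances_, index=x_under.columns).sort_values()
importances.plot(kind="barh", color="lightblue")
plt.title("RFC Feature Importances")
plt.xlabel("Importance")
plt.show()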
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/207/129207038.ipynb
pndm-prediction-dataset
slmsshk
[{"Id": 129207038, "ScriptId": 38412763, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 4523557, "CreationDate": "05/11/2023 20:30:01", "VersionNumber": 1.0, "Title": "PNDM-Prediction - 99.97% Accuracy", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 415.0, "LinesInsertedFromPrevious": 415.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 5}]
[{"Id": 185045985, "KernelVersionId": 129207038, "SourceDatasetVersionId": 5348272}]
[{"Id": 5348272, "DatasetId": 3105256, "DatasourceVersionId": 5421683, "CreatorUserId": 3372026, "LicenseName": "CC BY-SA 4.0", "CreationDate": "04/08/2023 17:19:52", "VersionNumber": 1.0, "Title": "PNDM Prediction Dataset", "Slug": "pndm-prediction-dataset", "Subtitle": "Challenging PNDM Prediction Dataset: Practice Advanced Data Science Skills.", "Description": "Unravel the mysteries of Permanent Neonatal Diabetes Mellitus (PNDM) and help doctors diagnose this rare but life-threatening condition earlier with our simulated PNDM prediction dataset. Inspired by real-world medical data and cutting-edge research, this comprehensive dataset includes six features that could help predict PNDM: age at diagnosis, HbA1c levels, genetic information, family history, clinical features, and laboratory data. But beware! Preprocessing the data presents many challenges, including handling missing values, outliers, class imbalance, and scaling and normalization issues. To tackle these challenges, we recommend using the latest data science tools and techniques, including feature selection, imputation, outlier detection, and scaling and normalization methods. Help advance medical research and save lives by exploring the complex world of PNDM with our challenging and exciting dataset.", "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
[{"Id": 3105256, "CreatorUserId": 3372026, "OwnerUserId": 3372026.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 5348272.0, "CurrentDatasourceVersionId": 5421683.0, "ForumId": 3168525, "Type": 2, "CreationDate": "04/08/2023 17:19:52", "LastActivityDate": "04/08/2023", "TotalViews": 842, "TotalDownloads": 90, "TotalVotes": 12, "TotalKernels": 1}]
[{"Id": 3372026, "UserName": "slmsshk", "DisplayName": "Salem S.", "RegisterDate": "06/20/2019", "PerformanceTier": 1}]
false
1
5,214
5
5,438
5,214
129207908
# # Executive Summary # This notebook displays information and important data that can be analyzed to determine the possibility of predicting a hit song. After analyzing the data, eliminating outliers, encoding qualitative data, and utilizing various visual models, it can be concluded that there is a slight possibility of predicting a hit song. # # Introduction # First we need to import all of our data. For the purposes of this experiment, we will be importing the following: import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import numpy as np from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import OneHotEncoder from numpy import array from numpy import argmax from pandas.core.internals.managers import create_block_manager_from_column_arrays import scipy.stats as stats from sklearn.metrics import r2_score from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score import warnings warnings.filterwarnings("ignore") # Then, we need to load the dataset that we will be analyzing. This dataset shows information regarding Spotify's hit songs from the years 2000-2019. import pandas as pd hit_songs = pd.read_csv("songs_normalize.csv") hit_songs.head() # There are 19 columns shown in the dataset and 2000 entries listed (there will be 18 columns later on when we clean up our data). We need to ensure that none of our necessary data is missing. After running the code below, we can confirm that no important data is missing in any of the columns. We can also check the description of our columns by showing the mode, std, max, min, etc. hit_songs.info() print(hit_songs.isna().sum()) hit_songs.describe() # # Data Exploration & Visualization # This visualization (which can easily be swapped between variables to show all of them) shows the correlation between any 2 variables. The two shown below visualize the correlation between 1. the duration of a song and its popularity and 2. the tempo and its popularity. This can be used to identify which variables have the most impact on the overall popularity. # For example, the first plot shows that the bulk of the popularity seems to lie within 20000 to 25000 on the duration axis, with the highest points landing closer to the 25000 mark. # Similarly, the second plot shows that the bulk of the popularity splits between 90 and 120. As a musician myself, I can verify that this lines up. The default tempo on my DAW system automatically sets the tempo at 120. That is a more upbeat song. It would track to lower the tempo for a ballad. # Setting data as variables x = np.array(hit_songs["duration_ms"]) y = np.array(hit_songs["popularity"]) # Create a scatter plot plt.scatter(x, y) # Set the plot title and axis labels plt.title("Scatter Visualization 1") plt.xlabel("Duration in Milliseconds") plt.ylabel("Popularity") # Show the plot plt.show() # Setting data as variables x = np.array(hit_songs["tempo"]) y = np.array(hit_songs["popularity"]) # Create a scatter plot plt.scatter(x, y) # Set the plot title and axis labels plt.title("Scatter Visualization 2") plt.xlabel("Tempo") plt.ylabel("Popularity") # Show the plot plt.show() # This visualization (which can easily be swapped between variables to show all of them) shows which keys are most popular. # While this can be useful, it is slightly less important than the aforementioned scatter plots.
Bar plots in this instance are helpful for visualizing levels (like key), but for some of our other categories they wouldn't be quite as useful. We're including it nonetheless, because it can prove to be somewhat informative. # The pie chart is useful for columns with a limited number of potential variables. For a column like "key", where there are only a certain number of possible values, it can be very helpful. # Extract the data for the pie chart sizes = hit_songs["key"].value_counts().values labels = hit_songs["key"].value_counts().index # Create the pie chart fig, ax = plt.subplots() ax.pie(sizes, labels=labels, autopct="%1.1f%%", startangle=90) ax.axis("equal") # Equal aspect ratio ensures that pie is drawn as a circle. # Add a title ax.set_title("Visualization 4") # Show the plot plt.show() # Setting data as variables x = np.array(hit_songs["key"]) y = np.array(hit_songs["popularity"]) # create bar chart plt.bar(x, y) # add labels and title plt.xlabel("Key") plt.ylabel("Popularity") plt.title("Bar Visualization") # display the chart plt.show() # # Data Preparation # For the purpose of this experiment, we made a change to one of the columns. The 'genre' column had too many variables to encode properly. We created another column named 'primary_genre' that only contains data up to the first comma in each row, so if one row has more than one genre attributed to it, only the first one (the primary genre) will be shown. The regular 'genre' column was omitted, and we were then able to encode 'primary_genre' along with our other qualitative data in order to properly analyze our dataset. # encode encoded_df = pd.get_dummies(hit_songs["genre"], prefix="genre") # concatenate df_encoded = pd.concat([hit_songs, encoded_df], axis=1) # drop df_encoded.drop("genre", axis=1, inplace=True) # create new column: 'primary_genre' hit_songs["primary_genre"] = hit_songs["genre"].str.replace(",.*", "", regex=True) # display new column print(hit_songs["primary_genre"]) # encode encoded_df = pd.get_dummies(hit_songs["explicit"], prefix="explicit") # concatenate df_encoded = pd.concat([hit_songs, encoded_df], axis=1) # drop df_encoded.drop("explicit", axis=1, inplace=True) # print print(df_encoded) # From the box plot shown below, it can be determined that, while there are technically outliers in the instrumentalness, acousticness, and speechiness columns, they are not extreme. The most extreme outliers seemed to be in the duration_ms and loudness columns. I ran code to determine upper and lower limits for both columns by taking 3 standard deviations above and below the mean. The data that fell outside that range are our extreme outliers. There weren't too many, so I removed that data, as it doesn't appear that it will be relevant to our question "What makes a hit song?"
hit_songs.plot(kind="box", subplots=True, sharey=False, figsize=(20, 20)) plt.subplots_adjust(wspace=0.7) plt.show() print("Duration_MS Outliers") print("Standard deviation: ", hit_songs["duration_ms"].std()) print("Mean: ", hit_songs["duration_ms"].mean()) upper_limit = hit_songs["duration_ms"].mean() + 3 * hit_songs["duration_ms"].std() print("Upper Limit:", upper_limit) lower_limit = hit_songs["duration_ms"].mean() - 3 * hit_songs["duration_ms"].std() print("Lower Limit:", lower_limit) print("Loudness Outliers") print("Standard deviation: ", hit_songs["loudness"].std()) print("Mean: ", hit_songs["loudness"].mean()) upper_limit = hit_songs["loudness"].mean() + 3 * hit_songs["loudness"].std() print("Upper Limit:", upper_limit) lower_limit = hit_songs["loudness"].mean() - 3 * hit_songs["loudness"].std() print( "Lower Limit:", lower_limit, ) # calculate the upper and lower limits for duration_ms duration_upper_limit = ( hit_songs["duration_ms"].mean() + 3 * hit_songs["duration_ms"].std() ) duration_lower_limit = ( hit_songs["duration_ms"].mean() - 3 * hit_songs["duration_ms"].std() ) # Calculate the upper and lower limits for loudness loudness_upper_limit = hit_songs["loudness"].mean() + 3 * hit_songs["loudness"].std() loudness_lower_limit = hit_songs["loudness"].mean() - 3 * hit_songs["loudness"].std() # Remove the outliers from the duration_ms column hit_songs = hit_songs[ (hit_songs["duration_ms"] >= duration_lower_limit) & (hit_songs["duration_ms"] <= duration_upper_limit) ] # Remove the outliers from the loudness column hit_songs = hit_songs[ (hit_songs["loudness"] >= loudness_lower_limit) & (hit_songs["loudness"] <= loudness_upper_limit) ] print(hit_songs) # # Data Modeling # For our model, we utilized most of our columns as input. We omitted the artist and song columns first, because while they were needed to collect our required data, they weren't pertinent to the overall result. When our data was still very scattered, we omitted the liveness, energy, mode, speechiness, and valence columns. They could be useful if we had an extremely large dataset to round out our research, but with only 2000 entries, these seem less important in regard to our overall question, "What makes a hit song?" # We chose a multiple linear regression model because we have many variables that are all very different, but that we predicted would ultimately work towards proving that a hit song can be predicted based on data from past hit songs. First we ran our train_test_split code to hold out a test set. Then we ran our multiple linear regression model to identify our coefficients. hit_songs = pd.read_csv("songs_normalize.csv") # NOTE: re-reading the CSV discards the cleaning and encoding above; 'primary_genre' is inserted here as a placeholder constant column hit_songs.insert(15, "primary_genre", True) X = hit_songs[ [ "duration_ms", "explicit", "year", "danceability", "key", "loudness", "acousticness", "instrumentalness", "tempo", "primary_genre", ] ] Y = hit_songs.popularity X_train, X_test, Y_train, Y_test = train_test_split( X, Y, test_size=0.25, random_state=0 ) lin_reg = LinearRegression() lin_reg.fit(X_train, Y_train) predictions = lin_reg.predict(X_test) print( f"Coefficients: {lin_reg.coef_} for ['duration_ms', 'explicit', 'year', 'danceability', 'key', 'loudness', 'acousticness', 'instrumentalness', 'tempo', 'primary_genre']\n" ) # # Modeling Assumptions Satisfied # - There appears to be little to no multicollinearity between variables. # - There is a linear relationship between popularity and the other columns. # - There doesn't appear to be any auto-correlation.
# - Homoscedasticity is apparent (the residual plot is displayed below). # # Visualization and Interpretation of the Model # Here is our residual plot. We ran a linear regression on the data to predict popularity. Each data point on the graph is what our equation predicted the popularity would be versus the difference between the actual popularity and our prediction. Excluding outliers along the bottom of the graph, the predicted popularity appears to be mostly within 1 standard deviation of the actual popularity. This leads us to believe that our regression can give us a fairly accurate prediction of the popularity of future songs based on the data points we utilized in the creation of this model. # To briefly explain the units on the graph, the x axis is the popularity score our equation predicts for a given song, and the y axis is how far the actual popularity falls from that prediction. residuals = Y_test - predictions ax = sns.residplot(x=predictions, y=residuals, scatter_kws={"s": 5}) ax.set_title("Residuals vs Fitted") ax.set_xlabel("Fitted values") ax.set_ylabel("Residuals") # # Evaluation of Model Strength # The MSE score is within acceptable ranges given the size of the sample pool, giving some credibility to the model's strength. Its R2 score is slightly negative, indicating that the model performs poorly at explaining the variance in the data, which can most likely be attributed to the outliers along the bottom of the plot. If those outliers were removed, the R2 score would go up; however, if those are legitimate data points (I have not run a test to determine this), removing them would weaken the model, as we would be biasing it toward a certain result. Overall, I would say that the model is moderately strong. mse = mean_squared_error(Y_test, predictions) r2 = r2_score(Y_test, predictions) # Print the evaluation metrics print("Mean Squared Error (MSE):", mse) print("R-squared (R2) Score:", r2)
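# A small robustness sketch (an addition, using only names already defined above): a single
# train/test split can make R2 swing a lot on a noisy target, so cross-validated R2 over the same
# feature matrix gives a steadier read on model strength.
from sklearn.model_selection import cross_val_score

cv_r2 = cross_val_score(LinearRegression(), X, Y, cv=5, scoring="r2")
print("Cross-validated R2: mean %.3f, std %.3f" % (cv_r2.mean(), cv_r2.std()))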
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/207/129207908.ipynb
null
null
[{"Id": 129207908, "ScriptId": 38413350, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13629268, "CreationDate": "05/11/2023 20:44:14", "VersionNumber": 1.0, "Title": "FinalReport1", "EvaluationDate": "05/11/2023", "IsChange": true, "TotalLines": 247.0, "LinesInsertedFromPrevious": 247.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
# # Executive Summary # This notebook displays information and important data that can be analyzed to determine the possibility of predicting a hit song. After analyzing the data, eliminating outliers, encoding qualitative data, and utilizing various visual models, it can be concluded that there is a slight possibility of predicting a hit song. # # Introduction # First we need to import all of our data. For the purposes of this experiment, we will be importing the following: import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import numpy as np from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import OneHotEncoder from numpy import array from numpy import argmax from pandas.core.internals.managers import create_block_manager_from_column_arrays import scipy.stats as stats from sklearn.metrics import r2_score from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score import warnings warnings.filterwarnings("ignore") # Then, we need to load the dataset that we will be analyzing. This dataset shows information regarding Spotify's hit songs from years 2000-2019. import pandas as pd hit_songs = pd.read_csv("songs_normalize.csv") hit_songs.head() # There are 19 columns shown in the dataset and 2000 entries listed. (there will be 18 later on when we clean up our data). We need to ensure that none of our necessary data is missing. After running the code below, we can concur that no important data is missing in any of the columns. We can also check the desciption of our columns by showing the mode, std, max, min, etc. hit_songs.info() print(hit_songs.isna().sum()) hit_songs.describe() # # Data Exploration & Visualization # This visualization (which can easily be swapped between variables to show all of them) shows the correlation between any 2 variables. The two shown below visualize the correlation between the 1. duration of a song and its popularity and 2. the tempo and its popularity. This can be used to identify which variables have the most impact on the overall popularity. # For example, the first plot shows that the bulk of the popularity seems to lie withing 20000 to 25000 on the duration axis, with the highest points landing closer to the 25000 mark. # Similarly, the second plot shows that the bulk of the popularity splits between 90 and 120. As a musician myself, I can verify that this lines up. The default tempo on my DAW system automatically sets the tempo at 120. That is a more upbeat song. It would track to lower the tempo for a ballad. # Setting data as variables x = np.array(hit_songs["duration_ms"]) y = np.array(hit_songs["popularity"]) # Create a scatter plot plt.scatter(x, y) # Set the plot title and axis labels plt.title("Scatter Visualization 1") plt.xlabel("Duration in Milliseconds") plt.ylabel("Popularity") # Show the plot plt.show() # Setting data as variables x = np.array(hit_songs["tempo"]) y = np.array(hit_songs["popularity"]) # Create a scatter plot plt.scatter(x, y) # Set the plot title and axis labels plt.title("Scatter Visualization 2") plt.xlabel("Tempo") plt.ylabel("Popularity") # Show the plot plt.show() # This visualization (which can easily be swapped between variables to show all of them) shows which keys are most popular. # While this can be useful, it is slightly less important than the aforementioned scatter plots. 
Bar plots in this instance are helpful to visualize levels (like key), but for some of our other categories, they wouldn't be quite as useful. We're including one nonetheless, because it can prove to be somewhat informative. # The pie chart is useful for columns with a limited number of potential values. For a column like "key", where there are only a certain number of possible values, it can be very helpful. # Extract the data for the pie chart sizes = hit_songs["key"].value_counts().values labels = hit_songs["key"].value_counts().index # Create the pie chart fig, ax = plt.subplots() ax.pie(sizes, labels=labels, autopct="%1.1f%%", startangle=90) ax.axis("equal") # Equal aspect ratio ensures that pie is drawn as a circle. # Add a title ax.set_title("Visualization 4") # Show the plot plt.show() # Setting data as variables x = np.array(hit_songs["key"]) y = np.array(hit_songs["popularity"]) # create bar chart plt.bar(x, y) # add labels and title plt.xlabel("Key") plt.ylabel("Popularity") plt.title("Bar Visualization") # display the chart plt.show() # # Data Preparation # For the purpose of this experiment, we made a change to one of the columns. The 'genre' column had too many distinct values to encode properly. We created another column named 'primary_genre' that only contains data up to the first comma in each row, so if one row has more than one genre attributed to it, only the first one (the primary genre) will be kept. The regular 'genre' column was omitted, and we were then able to encode 'primary_genre' along with our other qualitative data in order to properly analyze our dataset. # encode encoded_df = pd.get_dummies(hit_songs["genre"], prefix="genre") # concatenate df_encoded = pd.concat([hit_songs, encoded_df], axis=1) # drop df_encoded.drop("genre", axis=1, inplace=True) # create new column: 'primary_genre' hit_songs["primary_genre"] = hit_songs["genre"].str.replace(",.*", "", regex=True) # display new column print(hit_songs["primary_genre"]) # encode encoded_df = pd.get_dummies(hit_songs["explicit"], prefix="explicit") # concatenate df_encoded = pd.concat([hit_songs, encoded_df], axis=1) # drop df_encoded.drop("explicit", axis=1, inplace=True) # print print(df_encoded) # From the box plot shown below, it can be determined that, while there are technically outliers in the instrumentalness, acousticness, and speechiness columns, they are not extreme. The most extreme outliers seem to be in the duration_ms and loudness columns. I ran code to determine upper and lower limits for both columns by going 3 standard deviations above and below the mean. The data that fell outside that range are our extreme outliers. There weren't too many, so I removed that data, as it doesn't appear that it will be relevant to our question "What makes a hit song?" 
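# As an aside, the same 3-standard-deviation rule described above can be expressed more compactly with scipy.stats.zscore (scipy.stats is already imported as stats). This is only an approximately equivalent sketch, not the code used for the results below; note that zscore defaults to ddof=0 while pandas' .std() uses ddof=1:
z_duration = stats.zscore(hit_songs["duration_ms"])
z_loudness = stats.zscore(hit_songs["loudness"])
# Keep rows whose duration and loudness both fall within 3 standard deviations
within_3_sigma = (np.abs(z_duration) < 3) & (np.abs(z_loudness) < 3)
print("Rows flagged as extreme outliers:", (~within_3_sigma).sum())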
hit_songs.plot(kind="box", subplots=True, sharey=False, figsize=(20, 20)) plt.subplots_adjust(wspace=0.7) plt.show() print("Duration_MS Outliers") print("Standard deviation: ", hit_songs["duration_ms"].std()) print("Mean: ", hit_songs["duration_ms"].mean()) upper_limit = hit_songs["duration_ms"].mean() + 3 * hit_songs["duration_ms"].std() print("Upper Limit:", upper_limit) lower_limit = hit_songs["duration_ms"].mean() - 3 * hit_songs["duration_ms"].std() print("Lower Limit:", lower_limit) print("Loudness Outliers") print("Standard deviation: ", hit_songs["loudness"].std()) print("Mean: ", hit_songs["loudness"].mean()) upper_limit = hit_songs["loudness"].mean() + 3 * hit_songs["loudness"].std() print("Upper Limit:", upper_limit) lower_limit = hit_songs["loudness"].mean() - 3 * hit_songs["loudness"].std() print( "Lower Limit:", lower_limit, ) # calculate the upper and lowers limits for duartion_ms duration_upper_limit = ( hit_songs["duration_ms"].mean() + 3 * hit_songs["duration_ms"].std() ) duration_lower_limit = ( hit_songs["duration_ms"].mean() - 3 * hit_songs["duration_ms"].std() ) # Calculate the upper and lower limits for loudness loudness_upper_limit = hit_songs["loudness"].mean() + 3 * hit_songs["loudness"].std() loudness_lower_limit = hit_songs["loudness"].mean() - 3 * hit_songs["loudness"].std() # Remove the outliers from the duartion_ms hit_songs = hit_songs[ (hit_songs["duration_ms"] >= duration_lower_limit) & (hit_songs["duration_ms"] <= duration_upper_limit) ] # Remove the outliers from loudness column hit_songs = hit_songs[ (hit_songs["loudness"] >= loudness_lower_limit) & (hit_songs["loudness"] <= loudness_upper_limit) ] print(hit_songs) # # Data Modeling # For our model, we utilized most of our columns as input. We omitted the artist and song column first, because while it was needed to collect our required data, it wasn't pertanent to the overall result.When our data was still very scattered, we omitted the liveness, energy, mode, speechiness, and valence columns. They could be useful if we had an extremely large dataset to round out our research, but with only 2000 entries, these seem less important in regard to our overall question, "What makes a hit song?" # We chose a multiple linear regression model because we have many variables that are all very different, but that we predicted would ultimately work towards proving that a hit song can be predicted based on data from past hit songs. First we ran our train_test_split code to determine a test size. Then we ran our multiple linear regression model to identify our coefficients. hit_songs = pd.read_csv("songs_normalize.csv") hit_songs.insert(15, "primary_genre", True) X = hit_songs[ [ "duration_ms", "explicit", "year", "danceability", "key", "loudness", "acousticness", "instrumentalness", "tempo", "primary_genre", ] ] Y = hit_songs.popularity X_train, X_test, Y_train, Y_test = train_test_split( X, Y, test_size=0.25, random_state=0 ) lin_reg = LinearRegression() lin_reg.fit(X_train, Y_train) predictions = lin_reg.predict(X_test) print( f"Coefficients: {lin_reg.coef_} for ['duration_ms', 'explicit', 'year', 'danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'primary_genre']\n" ) # # Modeling Assumptions Satisfied # - There appears to be little to no multicollinearity between variables. # - There is a linear relationship between popularity and the other columns. # - There doesn't appear to be any auto-correlation. 
# - Homoscedasticity is apparent (a residual plot is displayed below). # # Visualization and Interpretation of the Model # Here is our residual plot. We ran a linear regression on the data to predict popularity. Each data point on the graph is what our equation predicted the popularity would be versus the difference between the actual popularity and our prediction. Excluding outliers along the bottom of the graph, the predicted popularity appears to be mostly within 1 standard deviation of the actual popularity. This leads us to believe that our regression can give us a fairly accurate prediction of the popularity of future songs based on the data points we utilized in the creation of this model. # To briefly explain the units on the graph: the x axis is the popularity score our equation predicts for a given song, and the y axis is how far the actual popularity differs from that prediction. residuals = Y_test - predictions ax = sns.residplot(x=predictions, y=residuals, scatter_kws={"s": 5}) ax.set_title("Residuals vs Fitted") ax.set_xlabel("Fitted values") ax.set_ylabel("Residuals") # # Evaluation of Model Strength # The MSE score is within acceptable ranges given the size of the sample pool, giving some credibility to its strength. Its R2 score is slightly negative, indicating that the model performs poorly at explaining the variance in the data, which can most likely be attributed to the outliers along the bottom of the plot. If those outliers were removed, the R2 score would go up; however, if those are legitimate data points (I have not run a test to determine this), removing them would weaken the model, as we would be biasing it toward a certain result. Overall, I would say that the model is moderately strong. mse = mean_squared_error(Y_test, predictions) r2 = r2_score(Y_test, predictions) # Print the evaluation metrics print("Mean Squared Error (MSE):", mse) print("R-squared (R2) Score:", r2)
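# As a supplement to the multicollinearity assumption stated above, here is a minimal, hedged sketch of a variance inflation factor (VIF) check. It assumes statsmodels is available in the environment and reuses X_train from the model section; VIF values well above 5-10 would suggest problematic multicollinearity:
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Drop constant columns (like the primary_genre placeholder), which make VIF undefined,
# and make sure the matrix is purely numeric
X_vif = X_train.loc[:, X_train.nunique() > 1].astype(float)
vif = pd.Series(
    [variance_inflation_factor(X_vif.values, i) for i in range(X_vif.shape[1])],
    index=X_vif.columns,
)
print(vif.sort_values(ascending=False))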
false
0
3257
0
3257
3257
129270087
import pandas as pd import matplotlib.pyplot as plt from sklearn.metrics import mean_squared_error import numpy as np import os os.chdir("/kaggle/input/hp-supply-chain-optimization") # ## Data Analysis train_df = pd.read_csv("train.csv") train_df train_df = train_df.drop_duplicates(["id", "year_week", "product_number"]) train_df.date = pd.to_datetime(train_df.date) gd = train_df.groupby("date").sum() plt.plot(gd.index, gd.inventory_units) plt.xticks(rotation=45) plt.xlabel("Date") plt.ylabel("Inventory Units") plt.title("Inventory over time") plt.show() plt.plot(gd.index, gd.sales_units) plt.xticks(rotation=45) plt.xlabel("Date") plt.ylabel("Sales Units") plt.title("Sales over time") plt.show() gd = train_df.groupby(["date", "prod_category"]).sum().reset_index() products = gd.prod_category.unique() for prod in products: gd_prod = gd[gd.prod_category == prod] plt.plot(gd_prod.date, gd_prod.inventory_units) plt.show() gd = train_df.groupby(["date", "product_number"]).sum().reset_index() products = gd.product_number.unique() for prod in products: gd_prod = gd[gd.product_number == prod] plt.plot(gd_prod.date, gd_prod.inventory_units) plt.xticks(rotation=45) plt.show() gd = train_df.groupby(["date", "product_number"]).sum().reset_index() gd = gd[gd.product_number == 233919] products = gd.product_number.unique() for prod in products: gd_prod = gd[gd.product_number == prod] plt.plot(gd_prod.date, gd_prod.inventory_units) plt.xticks(rotation=45) plt.show() plt.hist(train_df.inventory_units) train_df # ## Model test from sklearn.linear_model import LinearRegression def prepare_train_df(df): # Keep the model features and one-hot encode the categorical columns df = df[["year_week", "product_number", "prod_category", "segment"]] df = pd.get_dummies(df, columns=["product_number", "segment", "prod_category"]) return df train_df_clean = train_df.dropna() X_train = prepare_train_df(train_df_clean)  # fixed: was prepare_df, which is never defined y_train = train_df_clean.inventory_units y_train lm_model = LinearRegression() lm_model.fit(X_train, y_train) y_pred = lm_model.predict(X_train) y_true = y_train rms = mean_squared_error(y_true, y_pred, squared=False) rms # ## Test Data Predictions pd.read_csv("sample_submission.csv") test_df = pd.read_csv("test.csv") test_df[["year_week", "product_number"]] = test_df.id.str.split("-", expand=True) test_df["product_number"] = test_df.product_number.astype(int) test_df["year_week"] = test_df.year_week.astype(int)  # the split yields strings; cast to match the numeric training column test_df product_mapping = train_df[ ["product_number", "prod_category", "segment", "specs", "display_size"] ].drop_duplicates() product_mapping test_df_complete = test_df.merge(product_mapping, on="product_number", how="left") test_df_complete X_test = prepare_train_df(test_df_complete) y_pred = lm_model.predict(X_test) submission = pd.DataFrame({"id": test_df_complete.id, "inventory_units": y_pred}) submission submission.to_csv("/kaggle/working/submission.csv", index=False)  # index=False keeps only the two expected columns test_df
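# One caveat worth flagging in the pipeline above: calling pd.get_dummies separately on the train and test frames will generally produce different (and differently ordered) dummy columns whenever a product number, segment, or category appears in only one of the two sets, which can make lm_model.predict misalign features or fail outright. A minimal sketch of one way to guard against that, assuming X_train and test_df_complete from above:
# Align the one-hot encoded test matrix to the training feature space:
# columns unseen at training time are dropped, and columns missing from
# the test split are added back as all-zero indicators.
X_test_aligned = prepare_train_df(test_df_complete).reindex(
    columns=X_train.columns, fill_value=0
)
y_pred_aligned = lm_model.predict(X_test_aligned)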
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/270/129270087.ipynb
null
null
[{"Id": 129270087, "ScriptId": 38433159, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1943036, "CreationDate": "05/12/2023 10:28:52", "VersionNumber": 1.0, "Title": "HP Dummy Model", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 141.0, "LinesInsertedFromPrevious": 141.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
false
0
974
0
974
974
129270333
<jupyter_start><jupyter_text>120 years of Olympic history: athletes and results ### Context This is a historical dataset on the modern Olympic Games, including all the Games from Athens 1896 to Rio 2016. I scraped this data from www.sports-reference.com in May 2018. The R code I used to [scrape](https://github.com/rgriff23/Olympic_history/blob/master/R/olympics%20scrape.R) and [wrangle](https://github.com/rgriff23/Olympic_history/blob/master/R/olympics%20wrangle.R) the data is on GitHub. I recommend checking [my kernel](https://www.kaggle.com/heesoo37/olympic-history-data-a-thorough-analysis) before starting your own analysis. Note that the Winter and Summer Games were held in the same year up until 1992. After that, they staggered them such that Winter Games occur on a four year cycle starting with 1994, then Summer in 1996, then Winter in 1998, and so on. A common mistake people make when analyzing this data is to assume that the Summer and Winter Games have always been staggered. ### Content The file athlete_events.csv contains 271116 rows and 15 columns. Each row corresponds to an individual athlete competing in an individual Olympic event (athlete-events). The columns are: 1. **ID** - Unique number for each athlete 2. **Name** - Athlete's name 3. **Sex** - M or F 4. **Age** - Integer 5. **Height** - In centimeters 6. **Weight** - In kilograms 7. **Team** - Team name 8. **NOC** - National Olympic Committee 3-letter code 9. **Games** - Year and season 10. **Year** - Integer 11. **Season** - Summer or Winter 12. **City** - Host city 13. **Sport** - Sport 14. **Event** - Event 15. **Medal** - Gold, Silver, Bronze, or NA Kaggle dataset identifier: 120-years-of-olympic-history-athletes-and-results <jupyter_code>import pandas as pd df = pd.read_csv('120-years-of-olympic-history-athletes-and-results/athlete_events.csv') df.info() <jupyter_output><class 'pandas.core.frame.DataFrame'> RangeIndex: 271116 entries, 0 to 271115 Data columns (total 15 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 ID 271116 non-null int64 1 Name 271116 non-null object 2 Sex 271116 non-null object 3 Age 261642 non-null float64 4 Height 210945 non-null float64 5 Weight 208241 non-null float64 6 Team 271116 non-null object 7 NOC 271116 non-null object 8 Games 271116 non-null object 9 Year 271116 non-null int64 10 Season 271116 non-null object 11 City 271116 non-null object 12 Sport 271116 non-null object 13 Event 271116 non-null object 14 Medal 39783 non-null object dtypes: float64(3), int64(2), object(10) memory usage: 31.0+ MB <jupyter_text>Examples: { "ID": 1, "Name": "A Dijiang", "Sex": "M", "Age": 24, "Height": 180.0, "Weight": 80.0, "Team": "China", "NOC": "CHN", "Games": "1992 Summer", "Year": 1992, "Season": "Summer", "City": "Barcelona", "Sport": "Basketball", "Event": "Basketball Men's Basketball", "Medal": null } { "ID": 2, "Name": "A Lamusi", "Sex": "M", "Age": 23, "Height": 170.0, "Weight": 60.0, "Team": "China", "NOC": "CHN", "Games": "2012 Summer", "Year": 2012, "Season": "Summer", "City": "London", "Sport": "Judo", "Event": "Judo Men's Extra-Lightweight", "Medal": null } { "ID": 3, "Name": "Gunnar Nielsen Aaby", "Sex": "M", "Age": 24, "Height": NaN, "Weight": NaN, "Team": "Denmark", "NOC": "DEN", "Games": "1920 Summer", "Year": 1920, "Season": "Summer", "City": "Antwerpen", "Sport": "Football", "Event": "Football Men's Football", "Medal": null } { "ID": 4, "Name": "Edgar Lindenau Aabye", "Sex": "M", "Age": 34, "Height": NaN, "Weight": NaN, "Team": "Denmark/Sweden", "NOC": 
"DEN", "Games": "1900 Summer", "Year": 1900, "Season": "Summer", "City": "Paris", "Sport": "Tug-Of-War", "Event": "Tug-Of-War Men's Tug-Of-War", "Medal": "Gold" } <jupyter_script># # Is it relevant to invest in the country hosting the Olympic Games , namely France in 2024? # ![phpyE1z7s.jpg](attachment:c563e467-72b7-4ceb-83ee-5a74994dc9a5.jpg) # ## Hypothesis: # ### Possible correlation between the number of medals won by a country and the fact of being a host country of the OG? # ### Can we say that the number of medals won by a country is higher when it hosts the OG? # # 1.Importing our libraries and datasets # Libraries used import numpy as np import pandas as pd import matplotlib.pyplot as plt # Upload Data df = pd.read_csv( "../input/120-years-of-olympic-history-athletes-and-results/athlete_events.csv" ) # Keeping only relevant Data df = df.drop( [ "ID", "Name", "Games", "Sex", "Age", "Height", "Weight", "Games", "Sport", "Event", "NOC", ], axis=1, ) df.info() # # 2.Data Cleaning # Replacing cities with countries and acronyms with countries df.replace("USA", "United States of America", inplace=True) df.replace("Tanzania", "United Republic of Tanzania", inplace=True) df.replace( "Democratic Republic of Congo", "Democratic Republic of the Congo", inplace=True ) df.replace("Congo", "Republic of the Congo", inplace=True) df.replace("Lao", "Laos", inplace=True) df.replace("Syrian Arab Republic", "Syria", inplace=True) df.replace("Serbia", "Republic of Serbia", inplace=True) df.replace("Czechia", "Czech Republic", inplace=True) df.replace("UAE", "United Arab Emirates", inplace=True) df.replace("UK", "United Kingdom", inplace=True) df.replace("Rio de Janeiro", "Brazil", inplace=True) df.replace("London", "United Kingdom", inplace=True) df.replace("Beijing", "China", inplace=True) df.replace("Athina", "Greece", inplace=True) df.replace(["Sydney", "Melbourne"], "Australia", inplace=True) df.replace( ["Atlanta", "Los Angeles", "St. 
Louis"], "United States of America", inplace=True ) df.replace("Barcelona", "Spain", inplace=True) df.replace("Seoul", "South Korea", inplace=True) df.replace("Moskva", "Russia", inplace=True) df.replace("Montreal", "Canada", inplace=True) df.replace(["Munich", "Berlin"], "Germany", inplace=True) df.replace("Mexico City", "Mexico", inplace=True) df.replace("Tokyo", "Japan", inplace=True) df.replace("Roma", "Italy", inplace=True) df.replace("Paris", "France", inplace=True) df.replace("Helsinki", "Finland", inplace=True) df.replace("Amsterdam", "Netherlands", inplace=True) df.replace("Antwerpen", "Belgium", inplace=True) df.replace("Stockholm", "Sweden", inplace=True) # Counting our medals df["Medal"] = df["Medal"].apply(lambda x: 1 if str(x) != "nan" else 0) # Add a collumn for when a country hosts and plays in the same olympic game df["Country_Hosting"] = df["City"] == df["Team"] # Transform our categorical variable into a boolean one df["Season"] = df["Season"].apply(lambda x: 1 if str(x) == "Summer" else 0) # # 3.Graphique des médailles gagnées par chaque pays selon les années def plot_medals_by_country(country, season): # Filter if season == 1: country_team = df[(df["Team"] == country) & (df["Season"] == 1)] elif season == 0: country_team = df[(df["Team"] == country) & (df["Season"] == 0)] # Grouper medals par by Year and Country_Hosting country_non_host = ( country_team.groupby(["Year", "Country_Hosting"]).sum()["Medal"].reset_index() ) country_host = country_non_host[(country_non_host["Country_Hosting"] == True)] # Plot for Medals by Years plt.plot( country_non_host["Year"], country_non_host["Medal"], "b", label="Pays juste joueur", ) plt.scatter( country_host["Year"], country_host["Medal"], c="r", label="Pays hebergeur et joueur", ) # Title and Axis plt.title("Medals won by " + country + " in the Olympic Games") plt.xlabel("Years") plt.ylabel("Medals") plt.legend() plt.show() # ### FOCUS on France when the country hosts and competes in the Olympic Games plot_medals_by_country("France", 1) # ### FOCUS on Spain when the country hosts and competes in the Olympic Games plot_medals_by_country("Spain", 1) # ### FOCUS on China when the country hosts and competes in the Olympic Games plot_medals_by_country("China", 1)
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/270/129270333.ipynb
120-years-of-olympic-history-athletes-and-results
heesoo37
[{"Id": 129270333, "ScriptId": 36068282, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 10556995, "CreationDate": "05/12/2023 10:31:24", "VersionNumber": 16.0, "Title": "Olympic Games Medals Analysis", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 104.0, "LinesInsertedFromPrevious": 28.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 76.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 2}]
[{"Id": 185164040, "KernelVersionId": 129270333, "SourceDatasetVersionId": 40943}]
[{"Id": 40943, "DatasetId": 31029, "DatasourceVersionId": 43113, "CreatorUserId": 1966677, "LicenseName": "CC0: Public Domain", "CreationDate": "06/15/2018 06:10:41", "VersionNumber": 2.0, "Title": "120 years of Olympic history: athletes and results", "Slug": "120-years-of-olympic-history-athletes-and-results", "Subtitle": "basic bio data on athletes and medal results from Athens 1896 to Rio 2016", "Description": "### Context\n\nThis is a historical dataset on the modern Olympic Games, including all the Games from Athens 1896 to Rio 2016. I scraped this data from www.sports-reference.com in May 2018. The R code I used to [scrape](https://github.com/rgriff23/Olympic_history/blob/master/R/olympics%20scrape.R) and [wrangle](https://github.com/rgriff23/Olympic_history/blob/master/R/olympics%20wrangle.R) the data is on GitHub. I recommend checking [my kernel](https://www.kaggle.com/heesoo37/olympic-history-data-a-thorough-analysis) before starting your own analysis. \n\nNote that the Winter and Summer Games were held in the same year up until 1992. After that, they staggered them such that Winter Games occur on a four year cycle starting with 1994, then Summer in 1996, then Winter in 1998, and so on. A common mistake people make when analyzing this data is to assume that the Summer and Winter Games have always been staggered. \n\n### Content\n\nThe file athlete_events.csv contains 271116 rows and 15 columns. Each row corresponds to an individual athlete competing in an individual Olympic event (athlete-events). The columns are:\n\n1. **ID** - Unique number for each athlete\n2. **Name** - Athlete's name\n3. **Sex** - M or F\n4. **Age** - Integer\n5. **Height** - In centimeters\n6. **Weight** - In kilograms\n7. **Team** - Team name\n8. **NOC** - National Olympic Committee 3-letter code\n9. **Games** - Year and season\n10. **Year** - Integer\n11. **Season** - Summer or Winter\n12. **City** - Host city\n13. **Sport** - Sport\n14. **Event** - Event\n15. **Medal** - Gold, Silver, Bronze, or NA\n\n\n### Acknowledgements\n\nThe Olympic data on www.sports-reference.com is the result of an incredible amount of research by a group of Olympic history enthusiasts and self-proclaimed 'statistorians'. Check out their [blog](http://olympstats.com/) for more information. All I did was consolidated their decades of work into a convenient format for data analysis. \n\n### Inspiration\n\nThis dataset provides an opportunity to ask questions about how the Olympics have evolved over time, including questions about the participation and performance of women, different nations, and different sports and events.", "VersionNotes": "add file mapping NOCs to world map regions", "TotalCompressedBytes": 41504283.0, "TotalUncompressedBytes": 5692217.0}]
[{"Id": 31029, "CreatorUserId": 1966677, "OwnerUserId": 1966677.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 40943.0, "CurrentDatasourceVersionId": 43113.0, "ForumId": 39331, "Type": 2, "CreationDate": "06/11/2018 12:42:08", "LastActivityDate": "06/11/2018", "TotalViews": 676350, "TotalDownloads": 142036, "TotalVotes": 2039, "TotalKernels": 309}]
[{"Id": 1966677, "UserName": "heesoo37", "DisplayName": "rgriffin", "RegisterDate": "06/04/2018", "PerformanceTier": 1}]
[{"120-years-of-olympic-history-athletes-and-results/athlete_events.csv": {"column_names": "[\"ID\", \"Name\", \"Sex\", \"Age\", \"Height\", \"Weight\", \"Team\", \"NOC\", \"Games\", \"Year\", \"Season\", \"City\", \"Sport\", \"Event\", \"Medal\"]", "column_data_types": "{\"ID\": \"int64\", \"Name\": \"object\", \"Sex\": \"object\", \"Age\": \"float64\", \"Height\": \"float64\", \"Weight\": \"float64\", \"Team\": \"object\", \"NOC\": \"object\", \"Games\": \"object\", \"Year\": \"int64\", \"Season\": \"object\", \"City\": \"object\", \"Sport\": \"object\", \"Event\": \"object\", \"Medal\": \"object\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 271116 entries, 0 to 271115\nData columns (total 15 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 ID 271116 non-null int64 \n 1 Name 271116 non-null object \n 2 Sex 271116 non-null object \n 3 Age 261642 non-null float64\n 4 Height 210945 non-null float64\n 5 Weight 208241 non-null float64\n 6 Team 271116 non-null object \n 7 NOC 271116 non-null object \n 8 Games 271116 non-null object \n 9 Year 271116 non-null int64 \n 10 Season 271116 non-null object \n 11 City 271116 non-null object \n 12 Sport 271116 non-null object \n 13 Event 271116 non-null object \n 14 Medal 39783 non-null object \ndtypes: float64(3), int64(2), object(10)\nmemory usage: 31.0+ MB\n", "summary": "{\"ID\": {\"count\": 271116.0, \"mean\": 68248.95439590434, \"std\": 39022.28634475647, \"min\": 1.0, \"25%\": 34643.0, \"50%\": 68205.0, \"75%\": 102097.25, \"max\": 135571.0}, \"Age\": {\"count\": 261642.0, \"mean\": 25.556898357297374, \"std\": 6.393560847035813, \"min\": 10.0, \"25%\": 21.0, \"50%\": 24.0, \"75%\": 28.0, \"max\": 97.0}, \"Height\": {\"count\": 210945.0, \"mean\": 175.33896987366376, \"std\": 10.518462222679224, \"min\": 127.0, \"25%\": 168.0, \"50%\": 175.0, \"75%\": 183.0, \"max\": 226.0}, \"Weight\": {\"count\": 208241.0, \"mean\": 70.70239290053351, \"std\": 14.348019999019392, \"min\": 25.0, \"25%\": 60.0, \"50%\": 70.0, \"75%\": 79.0, \"max\": 214.0}, \"Year\": {\"count\": 271116.0, \"mean\": 1978.3784800601957, \"std\": 29.877631985613423, \"min\": 1896.0, \"25%\": 1960.0, \"50%\": 1988.0, \"75%\": 2002.0, \"max\": 2016.0}}", "examples": "{\"ID\":{\"0\":1,\"1\":2,\"2\":3,\"3\":4},\"Name\":{\"0\":\"A Dijiang\",\"1\":\"A Lamusi\",\"2\":\"Gunnar Nielsen Aaby\",\"3\":\"Edgar Lindenau Aabye\"},\"Sex\":{\"0\":\"M\",\"1\":\"M\",\"2\":\"M\",\"3\":\"M\"},\"Age\":{\"0\":24.0,\"1\":23.0,\"2\":24.0,\"3\":34.0},\"Height\":{\"0\":180.0,\"1\":170.0,\"2\":null,\"3\":null},\"Weight\":{\"0\":80.0,\"1\":60.0,\"2\":null,\"3\":null},\"Team\":{\"0\":\"China\",\"1\":\"China\",\"2\":\"Denmark\",\"3\":\"Denmark\\/Sweden\"},\"NOC\":{\"0\":\"CHN\",\"1\":\"CHN\",\"2\":\"DEN\",\"3\":\"DEN\"},\"Games\":{\"0\":\"1992 Summer\",\"1\":\"2012 Summer\",\"2\":\"1920 Summer\",\"3\":\"1900 Summer\"},\"Year\":{\"0\":1992,\"1\":2012,\"2\":1920,\"3\":1900},\"Season\":{\"0\":\"Summer\",\"1\":\"Summer\",\"2\":\"Summer\",\"3\":\"Summer\"},\"City\":{\"0\":\"Barcelona\",\"1\":\"London\",\"2\":\"Antwerpen\",\"3\":\"Paris\"},\"Sport\":{\"0\":\"Basketball\",\"1\":\"Judo\",\"2\":\"Football\",\"3\":\"Tug-Of-War\"},\"Event\":{\"0\":\"Basketball Men's Basketball\",\"1\":\"Judo Men's Extra-Lightweight\",\"2\":\"Football Men's Football\",\"3\":\"Tug-Of-War Men's Tug-Of-War\"},\"Medal\":{\"0\":null,\"1\":null,\"2\":null,\"3\":\"Gold\"}}"}}]
true
1
<start_data_description><data_path>120-years-of-olympic-history-athletes-and-results/athlete_events.csv: <column_names> ['ID', 'Name', 'Sex', 'Age', 'Height', 'Weight', 'Team', 'NOC', 'Games', 'Year', 'Season', 'City', 'Sport', 'Event', 'Medal'] <column_types> {'ID': 'int64', 'Name': 'object', 'Sex': 'object', 'Age': 'float64', 'Height': 'float64', 'Weight': 'float64', 'Team': 'object', 'NOC': 'object', 'Games': 'object', 'Year': 'int64', 'Season': 'object', 'City': 'object', 'Sport': 'object', 'Event': 'object', 'Medal': 'object'} <dataframe_Summary> {'ID': {'count': 271116.0, 'mean': 68248.95439590434, 'std': 39022.28634475647, 'min': 1.0, '25%': 34643.0, '50%': 68205.0, '75%': 102097.25, 'max': 135571.0}, 'Age': {'count': 261642.0, 'mean': 25.556898357297374, 'std': 6.393560847035813, 'min': 10.0, '25%': 21.0, '50%': 24.0, '75%': 28.0, 'max': 97.0}, 'Height': {'count': 210945.0, 'mean': 175.33896987366376, 'std': 10.518462222679224, 'min': 127.0, '25%': 168.0, '50%': 175.0, '75%': 183.0, 'max': 226.0}, 'Weight': {'count': 208241.0, 'mean': 70.70239290053351, 'std': 14.348019999019392, 'min': 25.0, '25%': 60.0, '50%': 70.0, '75%': 79.0, 'max': 214.0}, 'Year': {'count': 271116.0, 'mean': 1978.3784800601957, 'std': 29.877631985613423, 'min': 1896.0, '25%': 1960.0, '50%': 1988.0, '75%': 2002.0, 'max': 2016.0}} <dataframe_info> RangeIndex: 271116 entries, 0 to 271115 Data columns (total 15 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 ID 271116 non-null int64 1 Name 271116 non-null object 2 Sex 271116 non-null object 3 Age 261642 non-null float64 4 Height 210945 non-null float64 5 Weight 208241 non-null float64 6 Team 271116 non-null object 7 NOC 271116 non-null object 8 Games 271116 non-null object 9 Year 271116 non-null int64 10 Season 271116 non-null object 11 City 271116 non-null object 12 Sport 271116 non-null object 13 Event 271116 non-null object 14 Medal 39783 non-null object dtypes: float64(3), int64(2), object(10) memory usage: 31.0+ MB <some_examples> {'ID': {'0': 1, '1': 2, '2': 3, '3': 4}, 'Name': {'0': 'A Dijiang', '1': 'A Lamusi', '2': 'Gunnar Nielsen Aaby', '3': 'Edgar Lindenau Aabye'}, 'Sex': {'0': 'M', '1': 'M', '2': 'M', '3': 'M'}, 'Age': {'0': 24.0, '1': 23.0, '2': 24.0, '3': 34.0}, 'Height': {'0': 180.0, '1': 170.0, '2': None, '3': None}, 'Weight': {'0': 80.0, '1': 60.0, '2': None, '3': None}, 'Team': {'0': 'China', '1': 'China', '2': 'Denmark', '3': 'Denmark/Sweden'}, 'NOC': {'0': 'CHN', '1': 'CHN', '2': 'DEN', '3': 'DEN'}, 'Games': {'0': '1992 Summer', '1': '2012 Summer', '2': '1920 Summer', '3': '1900 Summer'}, 'Year': {'0': 1992, '1': 2012, '2': 1920, '3': 1900}, 'Season': {'0': 'Summer', '1': 'Summer', '2': 'Summer', '3': 'Summer'}, 'City': {'0': 'Barcelona', '1': 'London', '2': 'Antwerpen', '3': 'Paris'}, 'Sport': {'0': 'Basketball', '1': 'Judo', '2': 'Football', '3': 'Tug-Of-War'}, 'Event': {'0': "Basketball Men's Basketball", '1': "Judo Men's Extra-Lightweight", '2': "Football Men's Football", '3': "Tug-Of-War Men's Tug-Of-War"}, 'Medal': {'0': None, '1': None, '2': None, '3': 'Gold'}} <end_description>
1422
2
2966
1422
129270026
# # Visualizing Decision Boundaries of Various Classifiers on Artificial Datasets # Welcome to this notebook, where we will explore some common machine learning classifiers and plot decision boundaries made by them on 3 artificial datasets. This notebook has two primary objectives: # 1. Visualizing Decision Boundaries: # We aim to visualize the decision boundaries of several common machine learning classifiers. The decision boundary is the hypersurface that segregates different classes in the feature space, and its characteristics can tell us a lot about the classifier's performance and properties. # 2. Analyzing Overfitting and Underfitting: # Using the decision boundary visualizations, we will attempt to visually understand the phenomena of overfitting and underfitting. Overfitting and underfitting can greatly impact the performance of a machine learning model, and being able to visualize them can provide valuable insights for model tuning and selection. # Throughout this notebook, we will examine, compare, and draw insights from the behavior of different classifiers, including logistic regression, decision trees, support vector machines, k-nearest neighbours and random forests. Let's get started. # # Three Artificial Datasets # We're going to use three artificial datasets to test our models on. # Moons and circles are made using sklearn's datasets module, spirals is made using numpy. import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.datasets import make_moons, make_circles from sklearn.preprocessing import MinMaxScaler def make_spirals(n_samples, noise, random_seed): np.random.seed(random_seed) # Generate the coordinates for the points in each spiral arm. n = np.sqrt(np.random.rand(n_samples, 1)) * 600 * (2 * np.pi) / 360 x1 = (-np.cos(n) * n + np.random.rand(n_samples, 1) * noise) / 7 y1 = (np.sin(n) * n + np.random.rand(n_samples, 1) * noise) / 7 return ( np.vstack((np.hstack((x1, y1)), np.hstack((-x1, -y1)))), np.hstack((np.zeros(n_samples), np.ones(n_samples))).astype(int), ) def scale_data(data): scaler = MinMaxScaler(feature_range=(-1, 1)) return scaler.fit_transform(data) X_moons, y_moons = make_moons(n_samples=100, noise=0.1, random_state=42) X_circles, y_circles = make_circles( n_samples=100, noise=0.075, factor=0.5, random_state=42 ) X_spirals, y_spirals = make_spirals(n_samples=100, noise=0.8, random_seed=42) datasets = { "Moons": (scale_data(X_moons), y_moons), "Circles": (scale_data(X_circles), y_circles), "Spirals": (scale_data(X_spirals), y_spirals), } # Make it pretty orange = "#FFA630" blue = "#00A7E1" palette = [orange, blue] plt.figure(figsize=(16, 5)) for i, (dataset_name, (X, y)) in enumerate(datasets.items()): plt.subplot(1, 3, i + 1) sns.scatterplot( x=X[:, 0], y=X[:, 1], hue=y, palette=palette, s=120, alpha=0.9, edgecolor=None ) plt.title(dataset_name) plt.show() # # Deploying Machine Learning Classifiers # Let's use a variety of sklearn classifying algorithms on these datasets with their default configurations to see how they do: from sklearn.metrics import accuracy_score def plot_multiple_decision_boundaries( classifier, datasets, resolution=0.015, alpha=0.09, palette="viridis", clf_name="" ): """ Plots a 1x3 grid of plots, displaying the decision boundaries of a given untrained classifier on three datasets. :classifier: An untrained sklearn classifier algorithm (e.g. 
DecisionTreeClassifier()) :datasets: A dictionary containing three datasets of the format key='dataset name', value=[X, y] """ n_datasets = len(datasets) fig, axes = plt.subplots( 1, n_datasets, figsize=(6 * n_datasets, 6) ) # Adjust the figure size as needed for ax, (name, (X, y)) in zip(axes, datasets.items()): # Fit the classifier classifier.fit(X, y) # Find training accuracy of classifier acc = accuracy_score(y, classifier.predict(X)) # Create a mesh of points dist_from_edge = 0.5 x_min, x_max = X[:, 0].min() - dist_from_edge, X[:, 0].max() + dist_from_edge y_min, y_max = X[:, 1].min() - dist_from_edge, X[:, 1].max() + dist_from_edge xx, yy = np.meshgrid( np.arange(x_min, x_max, resolution), np.arange(y_min, y_max, resolution) ) # Use the classifier to predict the class of each point in the mesh Z = classifier.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # Create a dataframe with the results df = pd.DataFrame(dict(x=xx.ravel(), y=yy.ravel(), label=Z.ravel())) if clf_name == "": clf_name = classifier.__class__.__name__ # Plot the results using seaborn sns.scatterplot( data=df, x="x", y="y", hue="label", palette=palette, alpha=alpha, legend=False, ax=ax, ) sns.scatterplot( x=X[:, 0], y=X[:, 1], s=60, hue=y, palette=palette, alpha=0.9, edgecolor=None, legend=False, ax=ax, ) ax.set_title(f"Decision Boundary for {name} - {clf_name}") ax.legend(title=f"Accuracy: {acc:.2f}", loc="upper right").set_bbox_to_anchor( (0.95, 0.95) ) plt.tight_layout() plt.show() from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.svm import SVC from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import RandomForestClassifier classifiers = [ LogisticRegression(), DecisionTreeClassifier(), SVC(), KNeighborsClassifier(), RandomForestClassifier(), ] for classifier in classifiers: plot_multiple_decision_boundaries(classifier, datasets, palette=palette) # ### Accuracy on training data vs accuracy on testing data # Several of these classifiers have achieved 100% accuracy on the training datasets we've given them. 
Let's observe how these classifiers fare when classifying test data: def plot_classifier_performance( classifier, datasets_train, datasets_test, resolution=0.015, alpha=0.09, palette="viridis", clf_name="", ): n_datasets = len(datasets_train) fig, axes = plt.subplots( 1, n_datasets, figsize=(6 * n_datasets, 6) ) # Adjust the figure size as needed # Perhaps the most Pythonic for loop I have ever written for i, ( (name_train, (X_train, y_train)), (name_test, (X_test, y_test)), ) in enumerate(zip(datasets_train.items(), datasets_test.items())): # Fit the classifier classifier.fit(X_train, y_train) # Find training accuracy of classifier train_acc = accuracy_score(y_train, classifier.predict(X_train)) test_acc = accuracy_score(y_test, classifier.predict(X_test)) # Create a mesh of points dist_from_edge = 0.5 x_min, x_max = ( X_train[:, 0].min() - dist_from_edge, X_train[:, 0].max() + dist_from_edge, ) y_min, y_max = ( X_train[:, 1].min() - dist_from_edge, X_train[:, 1].max() + dist_from_edge, ) xx, yy = np.meshgrid( np.arange(x_min, x_max, resolution), np.arange(y_min, y_max, resolution) ) # Use the classifier to predict the class of each point in the mesh Z = classifier.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # Create a dataframe with the results df = pd.DataFrame(dict(x=xx.ravel(), y=yy.ravel(), label=Z.ravel())) if clf_name == "": clf_name = classifier.__class__.__name__ # Plot the results using seaborn sns.scatterplot( data=df, x="x", y="y", hue="label", palette=palette, alpha=alpha, legend=False, ax=axes[i], ) sns.scatterplot( x=X_test[:, 0], y=X_test[:, 1], s=60, hue=y_test, palette=palette, alpha=0.9, edgecolor=None, legend=False, ax=axes[i], ) # Find the misclassified points and plot them y_pred = classifier.predict(X_test) misclassified = y_test != y_pred sns.scatterplot( x=X_test[misclassified, 0], y=X_test[misclassified, 1], s=60, color="black", alpha=0.3, edgecolor=None, legend=False, ax=axes[i], ) axes[i].set_title(f"Decision Boundary for {name_train} - {clf_name}") axes[i].legend( title=f"Train acc: {train_acc:.3f}\nTest acc: {test_acc:.3f}", loc="upper right", ).set_bbox_to_anchor((0.95, 0.95)) plt.tight_layout() plt.show() X_moons_test, y_moons_test = make_moons(n_samples=500, noise=0.1, random_state=42) X_circles_test, y_circles_test = make_circles( n_samples=500, noise=0.075, factor=0.5, random_state=42 ) X_spirals_test, y_spirals_test = make_spirals(n_samples=500, noise=0.8, random_seed=42) datasets_test = { "Moons": (scale_data(X_moons_test), y_moons_test), "Circles": (scale_data(X_circles_test), y_circles_test), "Spirals": (scale_data(X_spirals_test), y_spirals_test), } # # Decision Tree Classifier plot_classifier_performance( DecisionTreeClassifier(), datasets_train=datasets, datasets_test=datasets_test, palette=palette, ) clf = DecisionTreeClassifier() clf.fit(X_spirals, y_spirals) print("Decision tree classifier for spirals data:") print(f"Tree depth: {clf.get_depth()}") print(f"Number of leaves: {clf.get_n_leaves()}") print(f"Number of nodes: {clf.tree_.node_count}") # ## Results # - The decision tree classifier scored 100% across the board on the training datasets, indicating it had fit the training data perfectly; however, its accuracy worsened on all testing datasets. # ### Moons dataset # - The classifier has underfit the dataset in some areas, and overfit it in other areas. # **Overfitting** # - The clearest overfitting is the band of orange running through at y=~0.5. 
This band caught an anomalous point in the training set, but in the testing set it's misclassifying several non-anomalous points. # - We can also see it's overfit on the horizontal boundaries running through y=~-0.25 and y=~0.25, where it's hugged the tips of each moon. This has resulted in some misclassification. # **Underfitting** # - The blocky shape of the decision boundary is a clear sign of underfitting, which could easily result in misclassifications of points in the corners of the blocks. # **Overall** # A classifier overfitting in some areas and underfitting in others is a clear sign of poor model selection. # ### Circles dataset # - Similar to the moons dataset, the model has overfit in some areas and underfit in others. # **Overfitting** # Misclassification of the points at the bottom of the inner circle due to the model hugging the points from the training data's inner circle is textbook overfitting, as the model has generalized poorly. # **Underfitting** # Again, the blocky shape of the decision boundary is an issue here, resulting in several misclassifications, this time in the top left and right corners of the inner boundary. # **Overall** # Again, overfitting in some areas and underfitting in others indicates a poor choice of classifier. The classifier is extremely simple, however. # ### Spirals dataset # - Despite the somewhat blocky shape, the decision tree classifier has done a pretty solid job on the spirals dataset. It has shown underfitting and overfitting in the same ways we saw in the previous two datasets, but to a far lesser extent. # **Overall** # The decision tree has proven itself a good classifier for this dataset, however the tree is also quite deep `depth=11` and has a lot of nodes `node_count=33`, which is generally indicative of a tree that will overfit. # # Support Vector Machine Classifier plot_classifier_performance( SVC(), datasets_train=datasets, datasets_test=datasets_test, palette=palette ) # ## Results: # - The SVM classifier performed well on the moons and circles data for both training and testing datasets. # - It performed poorly on both the training and testing datasets for the spirals data. # ### Moons dataset # - The classifier's performance slightly improved from training to testing. This is most likely due to there being a few anomalous points in the training set that the classifier ignored that were not present in the testing dataset. # **Overfitting** # - The model has a very clean, smooth decision boundary, which is exactly the shape we want. This model is not overfitting. # **Underfitting** # - The decision boundary hugs the inner tips of the moons a little too much, indicating some underfitting. We should increase the complexity of the model a little bit. # **Overall** # The SVC looks like an excellent candidate model for this dataset. The decision boundary is smooth and clean, however it's underfitting, so we should try increasing the complexity (a quick illustration of the relevant knobs follows the circles discussion below). # ### Circles dataset # - Judging from the training data and plotted decision boundary, it would be fair to assume our SVM model found a perfect decision boundary, but it turned out there were points in the testing data that weren't represented in the training data. This indicates there may have been some overlap between the two classes. # **Overall** # This should have been an excellent classifier; there is no detectable underfitting or overfitting. The problem lies more in the training data not perfectly representing the population. 
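# Before moving on to the spirals results, here is a brief, illustrative sketch of the complexity knobs mentioned above: for an RBF-kernel SVC, larger C (weaker regularization) and larger gamma (narrower kernel) both make the boundary more flexible. The exact values below are arbitrary and just for demonstration:
plot_classifier_performance(
    SVC(C=10, gamma=5),  # more flexible than the default SVC()
    datasets_train=datasets,
    datasets_test=datasets_test,
    palette=palette,
    clf_name="SVC (C=10, gamma=5)",
)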
# ### Spirals dataset # - The SVM model has not performed well on the spirals dataset, and we're not going to discuss it beyond the fact that it's significantly underfitting. # # K-Neighbors Classifier # plot_classifier_performance( KNeighborsClassifier(), datasets_train=datasets, datasets_test=datasets_test, palette=palette, ) # ## Results: # - K-neighbors performed the best overall of all the classifiers we tried. The decision boundaries are not as smooth as those produced by the SVM, but they are generally placed better. # ### Moons dataset # - The moons classifier scored perfectly. The decision boundary is a little rough, but the placement is pretty spot on. # **Overfitting** # - The model has overfit very slightly, as shown by the rough decision boundary, but generally it's very good. # **Underfitting** # - The model has scored perfectly and shows no underfitting. # **Overall** # An excellent classifier; the only way it could be improved is by smoothing out the decision boundary, which could be done by increasing the number of neighbours (larger k averages over more points, which smooths the boundary; a short sketch at the end of this notebook illustrates this). # ### Circles dataset # - The classifier has performed similarly to the SVM model, only the decision boundary is positioned a little worse. # **Overfitting** # This model slightly overfit, similarly to how it overfit in the first instance. Let's not prattle on. # **Overall** # A great classifier, held back a little by the rough decision boundary and by training data that doesn't seem perfectly representative of the testing data. # ### Spirals dataset # - A solid model, again with a rough decision boundary, but it seems sufficiently complex. The boundary could be a little more centered. # **Overall** # The K-neighbors classifier fit the spirals data almost perfectly. KNN is a great classifier for discovering non-linear patterns, provided there are enough points. # # Random Forest Classifier # plot_classifier_performance( RandomForestClassifier(), datasets_train=datasets, datasets_test=datasets_test, palette=palette, )
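# To make the neighbour-count point above concrete, here is a small sketch (with arbitrary, illustrative values of k) comparing KNeighborsClassifier at different neighbour counts. In general, small k gives rough, overfit-prone boundaries, while large k gives smoother but potentially underfit ones:
for k in (1, 5, 25):
    plot_classifier_performance(
        KNeighborsClassifier(n_neighbors=k),
        datasets_train=datasets,
        datasets_test=datasets_test,
        palette=palette,
        clf_name=f"KNN (k={k})",
    )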
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/270/129270026.ipynb
null
null
[{"Id": 129270026, "ScriptId": 38417681, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11239747, "CreationDate": "05/12/2023 10:28:14", "VersionNumber": 1.0, "Title": "\ud83d\udd0e Visualizing Decision Boundaries", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 330.0, "LinesInsertedFromPrevious": 330.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 16}]
null
null
null
null
# # Visualizing Decision Boundaries of Various Classifiers on Artificial Datasets # Welcome to this notebook, where we will explore some common machine learning classifiers and plot decision boundaries made by them on 3 artificial datasets. This notebook has two primary objectives: # 1. Visualizing Decision Boundaries: # We aim to visualize the decision boundaries of several common machine learning classifiers. The decision boundary is the hypersurface that segregates different classes in the feature space, and its characteristics can tell us a lot about the classifier's performance and properties. # 2. Analyzing Overfitting and Underfitting: # Using the decision boundary visualizations, we will attempt to visually understand the phenomena of overfitting and underfitting. Overfitting and underfitting can greatly impact the performance of a machine learning model, and being able to visualize them can provide valuable insights for model tuning and selection. # Throughout this notebook, we will examine, compare, and draw insights from the behavior of different classifiers, including logistic regression, decision trees, support vector machines, k-nearest neighbours and random forests. Let's get started. # # Three Artificial Datasets # We're going to use three artificial datasets to test our models on. # Moons and circles are made using sklearn's datasets module, spirals is made using numpy. import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.datasets import make_moons, make_circles from sklearn.preprocessing import MinMaxScaler def make_spirals(n_samples, noise, random_seed): np.random.seed(random_seed) # Generate the coordinates for the points in each spiral arm. n = np.sqrt(np.random.rand(n_samples, 1)) * 600 * (2 * np.pi) / 360 x1 = (-np.cos(n) * n + np.random.rand(n_samples, 1) * noise) / 7 y1 = (np.sin(n) * n + np.random.rand(n_samples, 1) * noise) / 7 return ( np.vstack((np.hstack((x1, y1)), np.hstack((-x1, -y1)))), np.hstack((np.zeros(n_samples), np.ones(n_samples))).astype(int), ) def scale_data(data): scaler = MinMaxScaler(feature_range=(-1, 1)) return scaler.fit_transform(data) X_moons, y_moons = make_moons(n_samples=100, noise=0.1, random_state=42) X_circles, y_circles = make_circles( n_samples=100, noise=0.075, factor=0.5, random_state=42 ) X_spirals, y_spirals = make_spirals(n_samples=100, noise=0.8, random_seed=42) datasets = { "Moons": (scale_data(X_moons), y_moons), "Circles": (scale_data(X_circles), y_circles), "Spirals": (scale_data(X_spirals), y_spirals), } # Make it pretty orange = "#FFA630" blue = "#00A7E1" palette = [orange, blue] plt.figure(figsize=(16, 5)) for i, (dataset_name, (X, y)) in enumerate(datasets.items()): plt.subplot(1, 3, i + 1) sns.scatterplot( x=X[:, 0], y=X[:, 1], hue=y, palette=palette, s=120, alpha=0.9, edgecolor=None ) plt.title(dataset_name) plt.show() # # Deploying Machine Learning Classifiers # Let's use a variety of sklearn classifying algorithms on these datasets with their default configurations to see how they do: from sklearn.metrics import accuracy_score def plot_multiple_decision_boundaries( classifier, datasets, resolution=0.015, alpha=0.09, palette="viridis", clf_name="" ): """ Plots a 1x3 grid of plots, displaying the decision boundaries of a given untrained classifier on three datasets. :classifier: An untrained sklearn classifier algorithm (e.g. 
DecisionTreeClassifier()) :datasets: A dictionary containing three datasets of the format key='dataset name', value=[X, y] """ n_datasets = len(datasets) fig, axes = plt.subplots( 1, n_datasets, figsize=(6 * n_datasets, 6) ) # Adjust the figure size as needed for ax, (name, (X, y)) in zip(axes, datasets.items()): # Fit the classifier classifier.fit(X, y) # Find training accuracy of classifier acc = accuracy_score(y, classifier.predict(X)) # Create a mesh of points dist_from_edge = 0.5 x_min, x_max = X[:, 0].min() - dist_from_edge, X[:, 0].max() + dist_from_edge y_min, y_max = X[:, 1].min() - dist_from_edge, X[:, 1].max() + dist_from_edge xx, yy = np.meshgrid( np.arange(x_min, x_max, resolution), np.arange(y_min, y_max, resolution) ) # Use the classifier to predict the class of each point in the mesh Z = classifier.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # Create a dataframe with the results df = pd.DataFrame(dict(x=xx.ravel(), y=yy.ravel(), label=Z.ravel())) if clf_name == "": clf_name = classifier.__class__.__name__ # Plot the results using seaborn sns.scatterplot( data=df, x="x", y="y", hue="label", palette=palette, alpha=alpha, legend=False, ax=ax, ) sns.scatterplot( x=X[:, 0], y=X[:, 1], s=60, hue=y, palette=palette, alpha=0.9, edgecolor=None, legend=False, ax=ax, ) ax.set_title(f"Decision Boundary for {name} - {clf_name}") ax.legend(title=f"Accuracy: {acc:.2f}", loc="upper right").set_bbox_to_anchor( (0.95, 0.95) ) plt.tight_layout() plt.show() from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.svm import SVC from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import RandomForestClassifier classifiers = [ LogisticRegression(), DecisionTreeClassifier(), SVC(), KNeighborsClassifier(), RandomForestClassifier(), ] for classifier in classifiers: plot_multiple_decision_boundaries(classifier, datasets, palette=palette) # ### Accuracy on training data vs accuracy on testing data # Several of these classifiers have achieved 100% accuracy on the training datasets we've given it. 
def plot_classifier_performance(
    classifier,
    datasets_train,
    datasets_test,
    resolution=0.015,
    alpha=0.09,
    palette="viridis",
    clf_name="",
):
    n_datasets = len(datasets_train)
    fig, axes = plt.subplots(
        1, n_datasets, figsize=(6 * n_datasets, 6)
    )  # Adjust the figure size as needed
    # Perhaps the most pythonic for loop I have ever written
    for i, (
        (name_train, (X_train, y_train)),
        (name_test, (X_test, y_test)),
    ) in enumerate(zip(datasets_train.items(), datasets_test.items())):
        # Fit the classifier
        classifier.fit(X_train, y_train)
        # Find training and testing accuracy of the classifier
        train_acc = accuracy_score(y_train, classifier.predict(X_train))
        test_acc = accuracy_score(y_test, classifier.predict(X_test))
        # Create a mesh of points
        dist_from_edge = 0.5
        x_min, x_max = (
            X_train[:, 0].min() - dist_from_edge,
            X_train[:, 0].max() + dist_from_edge,
        )
        y_min, y_max = (
            X_train[:, 1].min() - dist_from_edge,
            X_train[:, 1].max() + dist_from_edge,
        )
        xx, yy = np.meshgrid(
            np.arange(x_min, x_max, resolution), np.arange(y_min, y_max, resolution)
        )
        # Use the classifier to predict the class of each point in the mesh
        Z = classifier.predict(np.c_[xx.ravel(), yy.ravel()])
        Z = Z.reshape(xx.shape)
        # Create a dataframe with the results
        df = pd.DataFrame(dict(x=xx.ravel(), y=yy.ravel(), label=Z.ravel()))
        if clf_name == "":
            clf_name = classifier.__class__.__name__
        # Plot the results using seaborn
        sns.scatterplot(
            data=df,
            x="x",
            y="y",
            hue="label",
            palette=palette,
            alpha=alpha,
            legend=False,
            ax=axes[i],
        )
        sns.scatterplot(
            x=X_test[:, 0],
            y=X_test[:, 1],
            s=60,
            hue=y_test,
            palette=palette,
            alpha=0.9,
            edgecolor=None,
            legend=False,
            ax=axes[i],
        )
        # Find the misclassified points and plot them
        y_pred = classifier.predict(X_test)
        misclassified = y_test != y_pred
        sns.scatterplot(
            x=X_test[misclassified, 0],
            y=X_test[misclassified, 1],
            s=60,
            color="black",
            alpha=0.3,
            edgecolor=None,
            legend=False,
            ax=axes[i],
        )
        axes[i].set_title(f"Decision Boundary for {name_train} - {clf_name}")
        axes[i].legend(
            title=f"Train acc: {train_acc:.3f}\nTest acc: {test_acc:.3f}",
            loc="upper right",
        ).set_bbox_to_anchor((0.95, 0.95))
    plt.tight_layout()
    plt.show()


X_moons_test, y_moons_test = make_moons(n_samples=500, noise=0.1, random_state=42)
X_circles_test, y_circles_test = make_circles(
    n_samples=500, noise=0.075, factor=0.5, random_state=42
)
X_spirals_test, y_spirals_test = make_spirals(n_samples=500, noise=0.8, random_seed=42)
datasets_test = {
    "Moons": (scale_data(X_moons_test), y_moons_test),
    "Circles": (scale_data(X_circles_test), y_circles_test),
    "Spirals": (scale_data(X_spirals_test), y_spirals_test),
}
# # Decision Tree Classifier
plot_classifier_performance(
    DecisionTreeClassifier(),
    datasets_train=datasets,
    datasets_test=datasets_test,
    palette=palette,
)
clf = DecisionTreeClassifier()
clf.fit(X_spirals, y_spirals)
print("Decision tree classifier for spirals data:")
print(f"Tree depth: {clf.get_depth()}")
print(f"Number of leaves: {clf.get_n_leaves()}")
print(f"Number of nodes: {clf.tree_.node_count}")
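# A quick aside before looking at the default tree's results below: max_depth is a
# real DecisionTreeClassifier parameter that caps how deep the tree can grow, and a
# shallower tree trades some training accuracy for a simpler, less overfit boundary.
# The value 4 here is illustrative, not tuned.
plot_classifier_performance(
    DecisionTreeClassifier(max_depth=4),
    datasets_train=datasets,
    datasets_test=datasets_test,
    palette=palette,
    clf_name="DecisionTree (max_depth=4)",
)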
# ## Results
# - The decision tree classifier scored 100% across the board on the training datasets, indicating it had fit the training data exactly; however, its accuracy worsened on all of the testing datasets.
# ### Moons dataset
# - The classifier has underfit the dataset in some areas, and overfit it in other areas.
# **Overfitting**
# - The clearest overfitting is the band of orange running through at y=~0.5. This band caught an anomalous point in the training set, but in the testing set it misclassifies several non-anomalous points.
# - We can also see it has overfit on the horizontal boundaries running through y=~-0.25 and y=~0.25, where it has hugged the tips of each moon. This has resulted in some misclassification.
# **Underfitting**
# - The blocky shape of the decision boundary is a clear sign of underfitting, which could easily result in misclassification of points in the corners of the blocks.
# **Overall**
# A classifier that overfits in some areas and underfits in others is a clear sign of poor model selection.
# ### Circles dataset
# - Similar to the moons dataset, the model has overfit in some areas and underfit in others.
# **Overfitting**
# Misclassification of the points at the bottom of the inner circle, caused by the model hugging the training data's inner circle, is textbook overfitting: the model has generalized poorly.
# **Underfitting**
# Again, the blocky shape of the decision boundary is an issue here, resulting in several misclassifications, this time in the top left and right corners of the inner boundary.
# **Overall**
# Again, overfitting in some areas and underfitting in others indicates a poor choice of classifier. That said, the decision tree is an extremely simple classifier.
# ### Spirals dataset
# - Despite the somewhat blocky shape, the decision tree classifier has done a pretty solid job on the spirals dataset. It has shown underfitting and overfitting in the same ways we saw in the previous two datasets, but to a far lesser extent.
# **Overall**
# The decision tree has proven itself a good classifier for this dataset; however, the tree is also quite deep (`depth=11`) and has a lot of nodes (`node_count=33`), which is generally indicative of a tree that will overfit.
# # Support Vector Machine Classifier
plot_classifier_performance(
    SVC(), datasets_train=datasets, datasets_test=datasets_test, palette=palette
)
# ## Results:
# - The SVM classifier performed well on the moons and circles data for both training and testing datasets.
# - It performed poorly on both the training and testing datasets for the spirals data.
# ### Moons dataset
# - The classifier's performance slightly improved from training to testing. This is most likely because the training set contained a few anomalous points, which the classifier ignored and which were not present in the testing dataset.
# **Overfitting**
# - The model has a very clean, smooth decision boundary, which is exactly the shape we want. This model is not overfitting.
# **Underfitting**
# - The decision boundary hugs the inner tips of the moons a little too much, indicating some underfitting. We should increase the complexity of the model a little bit.
# **Overall**
# The SVC looks like an excellent candidate model for this dataset. The decision boundary is smooth and clean; however, it is underfitting, so we should try increasing the complexity.
# ### Circles dataset
# - Judging from the training data and the plotted decision boundary, it would be fair to assume our SVM model found a perfect decision boundary, but it turned out there were points in the testing data that weren't represented in the training data. This indicates there may have been some overlap between the two classes.
# **Overall**
# This should have been an excellent classifier: there is no detectable underfitting or overfitting. The problem is instead that the training data did not perfectly represent the population.
# ### Spirals dataset
# - The SVM model has not performed well on the spirals dataset, and we won't discuss it beyond noting that it is significantly underfitting. A sketch of a higher-capacity SVC follows below.
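# As a follow-up to the moons and spirals notes above, a quick sketch of a
# higher-capacity SVC. C and gamma are real SVC parameters (the defaults are C=1 and
# gamma="scale" in recent scikit-learn versions); the values below are illustrative,
# not tuned. A larger gamma makes the RBF kernel more local, which should help on
# the tightly wound spirals.
plot_classifier_performance(
    SVC(C=10, gamma=10),
    datasets_train=datasets,
    datasets_test=datasets_test,
    palette=palette,
    clf_name="SVC (C=10, gamma=10)",
)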
# # K-Neighbors Classifier
plot_classifier_performance(
    KNeighborsClassifier(),
    datasets_train=datasets,
    datasets_test=datasets_test,
    palette=palette,
)
# ## Results:
# - K-neighbors performed the best overall of all the classifiers we tried. The decision boundaries are not as smooth as those produced by the SVM, but they are generally placed better.
# ### Moons dataset
# - The moons classifier scored perfectly. The decision boundary is a little rough, but the placement is pretty spot on.
# **Overfitting**
# - The model has overfit very slightly, as evidenced by the rough decision boundary, but generally it's very good.
# **Underfitting**
# - The model has scored perfectly and shows no underfitting.
# **Overall**
# An excellent classifier; the only way it could be improved is by smoothing out the decision boundary, which could potentially be done by increasing the number of neighbours (a quick sketch of this appears at the end of the notebook).
# ### Circles dataset
# - The classifier has performed similarly to the SVM model, only the decision boundary is positioned a little worse.
# **Overfitting**
# This model slightly overfit, much as it did on the moons dataset.
# **Overall**
# A great classifier, held back a little by the rough decision boundary and by training data that seem not to be perfectly representative of the testing data.
# ### Spirals dataset
# - A solid model, again with a rough decision boundary, but one that seems sufficiently complex. The boundary could be a little more centered.
# **Overall**
# The K-neighbors classifier fit the spirals data almost perfectly. KNN is a great classifier for discovering non-linear patterns, provided there are enough points.
# # Random Forest Classifier
plot_classifier_performance(
    RandomForestClassifier(),
    datasets_train=datasets,
    datasets_test=datasets_test,
    palette=palette,
)
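# Following up on the K-neighbors notes above: increasing n_neighbors (the default
# is 5) averages each prediction over more neighbours and smooths the KNN decision
# boundary. The value 15 below is illustrative, not tuned.
plot_classifier_performance(
    KNeighborsClassifier(n_neighbors=15),
    datasets_train=datasets,
    datasets_test=datasets_test,
    palette=palette,
    clf_name="KNN (n_neighbors=15)",
)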
import os
import pandas as pd

# Peek at a single test recording to see what the accelerometer signals look like.
file = "/kaggle/input/tlvmc-parkinsons-freezing-gait-prediction/test/tdcsfog/003f117e14.csv"
data = pd.read_csv(file)
data.plot(subplots=True)
data.describe()

# Sum the event labels (StartHesitation, Turn, Walking) for every training file.
res = pd.DataFrame(columns=["H", "T", "W"])
data_dir = "/kaggle/input/tlvmc-parkinsons-freezing-gait-prediction/train/tdcsfog"
for fname in os.listdir(data_dir):
    path = os.path.join(data_dir, fname)
    df = pd.read_csv(path)
    res.loc[fname] = [df.StartHesitation.sum(), df.Turn.sum(), df.Walking.sum()]
res.plot()
# Count how many files contain at least one sample of each event type.
res.astype(bool).sum(axis=0).plot(kind="bar")
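# A small follow-up sketch, reusing data_dir from the cell above: normalize each
# file's event counts by its number of samples, so recordings of different lengths
# can be compared by event prevalence rather than raw counts.
fractions = pd.DataFrame(columns=["H", "T", "W"])
for fname in os.listdir(data_dir):
    df = pd.read_csv(os.path.join(data_dir, fname))
    n = len(df)
    fractions.loc[fname] = [
        df.StartHesitation.sum() / n,
        df.Turn.sum() / n,
        df.Walking.sum() / n,
    ]
# Mean fraction of samples per event type across all recordings.
fractions.mean().plot(kind="bar")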
import pandas as pd

dataframe = pd.read_csv("../input/predict-student-performance-from-game-play/train.csv")
import gc

gc.collect()


def delete_columns_and_save(dataframe, column_names, column_name, save_path):
    # Check that the columns to be dropped exist in the dataframe
    columns_not_found = [col for col in column_names if col not in dataframe.columns]
    if columns_not_found:
        print(f"Columns {columns_not_found} do not exist in the dataframe.")
        return
    # Delete the specified columns
    dataframe.drop(column_names, axis=1, inplace=True)
    print(f"Columns {column_names} have been deleted from the dataframe.")
    # Convert the values from milliseconds to seconds
    dataframe[column_name] = dataframe[column_name] / 1000.0
    print(f"Column '{column_name}' has been modified from milliseconds to seconds.")
    # Save the updated dataframe to a new file
    dataframe.to_pickle(save_path)
    print(f"The updated dataframe has been saved to '{save_path}'.")


delete_columns_and_save(
    dataframe,
    ["hover_duration", "fullscreen", "hq", "music"],
    "elapsed_time",
    "updated_data.pickle",
)
gc.collect()
del dataframe
data = pd.read_pickle("../working/updated_data.pickle")
data.info(memory_usage="deep")
gc.collect()
# Downcast numeric columns and convert repetitive strings to categoricals to save memory.
data["session_id"] = data["session_id"].astype("int32")
data["index"] = data["index"].astype("int32")
data["elapsed_time"] = data["elapsed_time"].astype("float32")
data["event_name"] = data["event_name"].astype("category")
data["name"] = data["name"].astype("category")
data["level"] = data["level"].astype("int32")
data["page"] = data["page"].astype("float32")
data["room_coor_x"] = data["room_coor_x"].astype("float32")
data["room_coor_y"] = data["room_coor_y"].astype("float32")
data["screen_coor_x"] = data["screen_coor_x"].astype("float32")
data["screen_coor_y"] = data["screen_coor_y"].astype("float32")
data["text"] = data["text"].astype("category")
data["fqid"] = data["fqid"].astype("category")
data["room_fqid"] = data["room_fqid"].astype("category")
data["text_fqid"] = data["text_fqid"].astype("category")
data["level_group"] = data["level_group"].astype("category")
data.to_pickle("output_pickle_file.pickle")
data.info(memory_usage="deep")
data2 = pd.read_pickle("/kaggle/working/output_pickle_file.pickle")
del data
data2.info(memory_usage="deep")
# Split the data into one CSV per level group.
column = "level_group"
split_values = data2[column].unique()
for value in split_values:
    data_split = data2[data2[column] == value]
    data_split.to_csv(f"{value}.csv", index=False)
import pandas_profiling as pp

pp.ProfileReport(data2)


def verify(dataframe, column_name, value):
    if column_name not in dataframe.columns:
        print(f"Column '{column_name}' does not exist in the dataframe.")
        return
    if value in dataframe[column_name].values:
        print(f"Column '{column_name}' contains the value '{value}'.")
    else:
        print(f"Column '{column_name}' does not contain the value '{value}'.")


# Note: the original `dataframe` was deleted above, so these checks run on `data2`.
verify(data2, "index", 0)


def verify_column_empty_values(dataframe, column_name):
    # Check if the column exists in the dataframe
    if column_name not in dataframe.columns:
        print(f"Column '{column_name}' does not exist in the dataframe.")
        return
    # Check if the column contains any empty values
    if dataframe[column_name].isnull().any():
        print(f"Column '{column_name}' contains empty values.")
    else:
        print(f"Column '{column_name}' does not contain empty values.")


verify_column_empty_values(data2, "text_fqid")


def verify_column_numeric_value_repeated(dataframe, column_name, value):
    # Check if the column exists in the dataframe
    if column_name not in dataframe.columns:
        print(f"Column '{column_name}' does not exist in the dataframe.")
        return
    # Count the occurrences of the specified value in the column
    value_count = dataframe[column_name].value_counts().get(value, 0)
    # Check if the value appears more than once
    if value_count > 1:
        print(
            f"Column '{column_name}' contains the numeric value '{value}' more than once."
        )
    else:
        print(
            f"Column '{column_name}' does not contain the numeric value '{value}' more than once."
        )


verify_column_numeric_value_repeated(data2, "index", 0)


def get_column_values(dataframe, column_name):
    # Check if the column exists in the dataframe
    if column_name not in dataframe.columns:
        print(f"Column '{column_name}' does not exist in the dataframe.")
        return None
    # Retrieve the values in the specified column
    return dataframe[column_name].values


name_column_values = get_column_values(data2, "fqid")
print(name_column_values)
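# The astype() block above does the downcasting by hand. A reusable sketch of the
# same idea (the helper name and the 50% cardinality threshold are illustrative
# choices, not taken from elsewhere in this notebook):
def downcast_dataframe(df):
    """Downcast numeric columns and convert low-cardinality object columns to category."""
    out = df.copy()
    for col in out.columns:
        if pd.api.types.is_integer_dtype(out[col]):
            out[col] = pd.to_numeric(out[col], downcast="integer")
        elif pd.api.types.is_float_dtype(out[col]):
            out[col] = pd.to_numeric(out[col], downcast="float")
        elif out[col].dtype == object and out[col].nunique() < 0.5 * len(out):
            out[col] = out[col].astype("category")
    return out


# Example usage on the already-loaded frame:
# data2_small = downcast_dataframe(data2)
# data2_small.info(memory_usage="deep")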
<jupyter_start><jupyter_text>Linear Regression
# Context
This is probably the dumbest dataset on Kaggle. The whole point is, however, to provide a common dataset for linear regression. Although such a dataset can easily be generated in Excel with random numbers, results would not be comparable.
# Content
The training dataset is a CSV file with 700 data pairs (x,y). The x-values are numbers between 0 and 100. The corresponding y-values have been generated using the Excel function NORMINV(RAND(), x, 3). Consequently, the best estimate for y should be x.
The test dataset is a CSV file with 300 data pairs.
# Acknowledgements
Thank you, Dan Bricklin and Bob Frankston for inventing the first spreadsheet.
# Inspiration
I hope this dataset will encourage all newbies to enter the world of machine learning, possibly starting with a simple linear regression.
# Data license
Obviously, data is free.
Kaggle dataset identifier: random-linear-regression
<jupyter_code>import pandas as pd
df = pd.read_csv('random-linear-regression/train.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 700 entries, 0 to 699
Data columns (total 2 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   x       700 non-null    float64
 1   y       699 non-null    float64
dtypes: float64(2)
memory usage: 11.1 KB
<jupyter_text>Examples:
{ "x": 24.0, "y": 21.54945196 }
{ "x": 50.0, "y": 47.46446305 }
{ "x": 15.0, "y": 17.21865634 }
{ "x": 38.0, "y": 36.58639803 }
<jupyter_code>import pandas as pd
df = pd.read_csv('random-linear-regression/test.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 300 entries, 0 to 299
Data columns (total 2 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   x       300 non-null    int64
 1   y       300 non-null    float64
dtypes: float64(1), int64(1)
memory usage: 4.8 KB
<jupyter_text>Examples:
{ "x": 77.0, "y": 79.77515201 }
{ "x": 21.0, "y": 23.17727887 }
{ "x": 22.0, "y": 25.60926156 }
{ "x": 20.0, "y": 17.85738813 }
<jupyter_script>
import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)

# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os

for dirname, _, filenames in os.walk("/kaggle/input"):
    for filename in filenames:
        print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/random-linear-regression/train.csv")
df2 = pd.read_csv("/kaggle/input/random-linear-regression/test.csv")
# df.describe()
import matplotlib.pyplot as plt

# Display the first rows without reassigning df2: keeping all 300 rows is necessary,
# because a 90% test split on only 5 points would leave an empty training set and
# raise an error in train_test_split.
df2.head(5)
df2.describe()
plt.scatter(df2["x"], df2["y"])
X = df2.iloc[:, :-1].values
y = df2.iloc[:, 1].values
# # Test size 20%
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0
)  # change here
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
regressor.score(X_train, y_train) * 100
plt.scatter(X_train, y_train, color="g")
plt.plot(X_test, y_pred, color="k")
# # Test size 65%
X = df2.iloc[:, :-1].values
y = df2.iloc[:, 1].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.65, random_state=0
)  # change here
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
regressor.score(X_train, y_train) * 100
plt.scatter(X_train, y_train, color="g")
plt.plot(X_test, y_pred, color="k")
# # Test size 90%
X = df2.iloc[:, :-1].values
y = df2.iloc[:, 1].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.90, random_state=0
)  # change here
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
regressor.score(X_train, y_train) * 100
plt.scatter(X_train, y_train, color="g")
plt.plot(X_test, y_pred, color="k")
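# The scores above are *training* R^2 values. A short added check evaluating the
# most recent fit (the 90% split) on its held-out data as well:
from sklearn.metrics import mean_squared_error, r2_score

print(f"Train R^2: {regressor.score(X_train, y_train):.3f}")
print(f"Test R^2:  {r2_score(y_test, regressor.predict(X_test)):.3f}")
print(f"Test MSE:  {mean_squared_error(y_test, regressor.predict(X_test)):.3f}")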
# # Making Decisions with Hidden Information: Hidden Markov Models
# # Markov Models
# [Figure: markov.png — state-transition diagram of the three-state weather Markov chain]
# Transition matrix:
# $M = \left( \begin{array}{ccc} .6 & .3 & .1 \\ .3 & .5 & .2 \\ .1 & .3 & .6 \end{array}\right)$
# Each row of $M$ sums to 1: reading row $i$ as today's state, row $i$ gives the distribution over tomorrow's weather.
# Initial probabilities
# If we have no information about the recent weather, then 35% of the time it
# is hot, 37.5% of the time it is warm and 27.5% of the time it is cold.
# $\pi = \left( \begin{array}{c} .350 \\ .375 \\ .275 \end{array} \right)$ # # Add Observations # ![hmm.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAzAAAAPJCAYAAADNlc3KAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nOzdeZzbVbn48c/MdC90gdJCWcq+LwUEKTsICCJwBRURRVFBUC+CXkQFFC+I4IZ4UUAUELmI7IIKyKYCotyyyFL2nSJLCy3Q0j2/P57kl3TI7EnO95t83q9XXjPtJJlnJpnkPOec5zkdSJKUPysAU4EJwFvAPKAArA5sD3wTWKl4HUmSJElKai0iYal2WQgcmS40SVI9DUodgCRJ/fRn4BVgPWA54HlixeUc4JmEcUmSJEnSUtYCTk4dhCSp8dpTByBJkiRJvWUCI0mSJCk3TGAkSZIk5YZF/JKkvBsPvIco5n8R+BMwJ2lEkqS66UgdgCRJ/bAckbSsDawL3AncAawKXAw8AjyXLDpJkiRJqrAm8BiwcpWvvRdYAOzd0IgkSZIkqQsdwLhuvv4n4ElgSGPCkSQ1ilvIJEl5VADmdvP1VYCPEtvI7mtIRJKkhrALmSSpGT1f/Lh10igkSTVnAiNJakalAv4Nk0YhSao5ExhJUt4cBrwM7NXNdRYUP46qfziSpEYygZEk5c3ewARgSjfXGVP8+Hw315Ek5ZAHWUqS8mYacEXx0pUNih9vqX84kiRJktS1XYGjerjOX4AXgGXrHo0kSZKkljcIuBp4BNi+ytcvoLzK0tneRJvlfesTmiRJkiQtbWciCSkA51T5+gTgMuDATv//IWA68Nl6BidJkiRJlZYFbgQeBbbq4jqDgBOBu4hal98Tyc56jQhQkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJLawtdQCSJPXCcGAkMKp4GQnMB+YUL28Bs4ElqQKUJDWGCYwkKSvGAxsB6xUv6wPrAqsD7Z2u+w4wBOjo9P9zgMeBx4qXR4v/ngbMq1PckqQGMoGRJKWyAvABYFdgO2AtYiWllIA8Wvz4LPAmkZy8DcwCCsX7GEasxowGlgHGEsnPusAGxc9XBxYBU4E7gBuLHxfW9aeTJEmSlHsrAkcDdwGLgaeBc4BPEglMPQwFtgWOBa4jEqFZwCXA3sDgOn1fSZIkSTnUBuwJ/IFYCXkQ+AaxXSyFYcBewK+IROZV4HRgUqJ4JEmSJGXAIOBQYhvY28DPgc2SRvRuw4ADgb8RydUVZC9GSZIkSXXUBnwceJJY3TgOGJM0ot6ZDPyWSGSuBNZJG44kSZKketuUWM2YBXyNKLDPmw2Bq4iuZacCI9KGI0mSJKnWBgEnEGezXEAU6+fd7sAjRFe0KYljkSRJklQjqwD/AF4E3p84llobBvwQWAB8G48fkCRJknJte+Bloj3x2MSx1NMuwCvA74FRiWORJEmS1A8fAt4BTgHaE8fSCKsB9wD3AeMTxyJJkiSpDw4h6l0OSx1Igy0D3Aw8SmydkyRJkpRx/0EkLx9LHUgiw4hDOR8GlksciyRJkqRubAvMBT6fOpDEhgN3FC/DEsciSZIkqYrxwHTibBRF04LHgLNTByJJkiRpae3ATcCtQEfiWLJkM2JFqlW300mSJEmZdDgwg+Y4oLLWjiR+NyukDkSSJEkSjANeAz6TOpCMaidqYS5MHIckSZIk4CzgTjyFvjsbAwuBLVIHIkmSJLWyVYF5wM6J48iDi4FrUgchSZIktbIzgdtSB5ET6wOLiNUYSZIkSQ02HHidOLhSvXM9kfRJkiRJarBPAP8GBqcOJEf2B2bi4ZaSJElSw10FnJE6iJwZDMwC9kwdiCRJktRKSgPx3VMHkkNX4DYySZIkqaF2BOYAQ1MHkkOfAx5JHYQkNav21AFIkjLpPcB9wPzUgeTQP4F1gWVTByJJzcgERpJUzWbAv1IHkVOPAAuATVMHIknNyARGklTNBsC01EHk1CLgUWDD1IFIUjMygZEkVTMReDF1EDn2ErBS6iAkqRmZwEiSOmsDxhNnwKh/XgYmpA5CkpqRCYwkqbNliDbKb6QOJMfeAMakDkKSmpEJjCSps9J7w5KkUeTbYnyPlaS68MVVktRZW/GjCUz/LcH3WEmqC19cJUmdLSp+HJI0inwbRKzCSJJqzARGktTZ28A8YPnUgeTYOOC11EFIUjMygZEkVTOTGISrf8YRv0NJUo2ZwEiSqnkVWDF1EDm2IvE7lCTVmAmMJKmax4H1UgeRY+sSv0NJUo2ZwEiSqnkE2CB1EDm1MjAKeDR1IJLUjExgJEnVTAM2Sh1ETm0IzAZeSh2IJDUjExhJUjX/B6wGrJI6kBzalvj9SZLqwARGklTNs8ALwHaJ48ijHYDbUwchSc3KBEaS1JU7iMG4em8wsA3xu5MkSZLUQIcBT6QOImd2BuYAwxPHIUmSJLWcCcBiLObvi58AV6cOQpIkSWpVdwLHpw4iR54GPp06CEmSJKlVfQ2YmjqInNgCWASskDoQSZIkqVWtQgzKN00dSA78FPhj6iAkSZKkVncj8OPUQWTcEOA14IDUgUiSJEmt7kBgBjA0dSAZ5u9IkiRJyoihxOrCQakDybBbgR+mDkKSJElS+A5wP9CWOpAM2pKoE1ozdSCSJEmSwgrAXGDX1IFk0G+BS1IHIUmSJGlpZwN/Sh1ExkwCFgJbpw5EkiRJ0tLWIQbrW6UOJEPOBm5JHYQkSZKk6s4H/pw6iET2ByZW/HsNYD6wY5pwJEmSJPVkEjCP1quFWY4o1H8D2KP4fxcD1yWLSJIkSVKv/BS4K3UQDXYo8A6whEhkfklsp5ucMihJkiRJPVsReJM4vLFV3EYkL4XiZTFxcOXE7m4kSZIkKRu+BrwAjEwdSAOMJVZdCp0uC4DZwF7pQpMkSZLUG0OAR4GTUwfSAJ8lto91TmAKlLeU/Q8wOFWAkiRJknq2O1HQv07qQOrsVpbePlbtMo+oCxqdKEZJkiRJvXANcAPQljqQOil1H+sueSklMP8GVkkTpiRJkqTemAi8TnTpakbdbR8rXRYCVxHJjiRJkqSMOwyYRXOuPnTuPlZ5mQ+8DRycLDpJkiRJfdYG3EjzHerYVfexUvJyK7BysugkSZIk9dsaxGrEJ1IHUkPVto8tKF6+TPPW/UiSJEkt4XDgLZqnK1nn7WPzgfuBdVMGJUmSJKl2LgXuJv9nolRuH1tUvJxO/n8uSZIkSRXGAM8B/506kAE6knLy8hSwZdpwJEkl7t+VJNXaTsBNwKbAo728zVBgElFLsyKwfPGyQvHjqOL1RgJDOt32LSLRWAzM7HSZATwPPAu81Ief4dliPOcAXyFqYSRJGWACI0mqh8lEvUhnywGbAZsULxsQSctK1P89aR6xOvQ08GDF5RGiML/SKUTydXGdY5Ik9ZEJjCSpXgYDmwPbAtsB21D9rJhXiBWP
Z4ofp1NePZlR/Pyt4nVnE4X1lYYDw4B2yis3lSs4pZWdNYDVgRGdbr8QeBi4E/h78eNzffxZJUkNYgIjSaqljYG9gD2JhKUyWZhFrMo8CDxQvDwMzGlwjCsTcZZWgjYFNgQGVVxnOnALcD2xHW5mg2OUJEmSVAeDgQ8AvwBeYOkzUx4HLgQOAzYiVkiyaiSwC3AC8Cci2Sr9HIuAu4pfWz9VgJIkSZL6pwPYHjiT2P5VGujPIVYrvkxs28qzDqLz2HHAHUSDgNLP+TBwEiYzkiRJUqatA5wGvEx5MP8acC6wG+/uDtZMJhArSTdRPh+mQKzMfIZYwZEkSZKU2GDgIOBWyqfSvwn8CtiDpetGWsV44pyYv1H+ncwGfk7U1UiSJElqsFHEVrDnKa82TAUOB5ZJGFfWrEusSlVupbsD2Acb5UiSJEl1tyrwE2KVpUAc4PgLXFnoyRDgo0Qb5lIi8xBwKLGKJUmSJKmGViBWEt6hXNtyGjAxZVA5tSVwEXG+TIE42+ZwWnO7nSRJklRT44AfEB3ECsT5J18iDofUwKwNXEA5kXkU+BhuLZMkSZL6bDCxKvAa5RWX4zBxqYc1iE5tpe5lU4k21JIkSZJ64QPEakABeAs4HtsAN8JGwB+I3/sS4DfAykkjkiRJkjJsFeBaYgC9GLgQa1xSeD8wjXgc3gaOIQ7OlCRJkgS0E+eWzCYGzf8AtkoakQYB/wm8QTwmdwObJo1IkiRJyoB1iAMXS9vFjiISGmXDROAq4vFZAJxCtGSWJEmSWs4hRNJSAK4HJqUNR93YB3iBeKweBDZJG44kSZLUOOOAqymvunwmbTjqpbHApcTjNgc4Im04kiRJUv3tSJzlUqp1WTttOOqHQ4A3icfwGmB02nAkSZKk+jiaODRxMXAynvyeZ2sSCWgBeBzYOG04kiRJUu0MI058LwAzgT3ThqMaGQScRjyuc4FPpQ1HkiRJGrhVgPuJQe69xKnvai6HEAlMAfge0JY2HEmSJKl/JgMvEgPb3wDD04ajOpoMPEs81r8jVt0kSZKk3NidOJhyCXASzsq3ghWJAy8LwF3A+LThSJIkSb3zKaJYfz6xvUitYyTweyKJeQxYLW04kiRJUve+QKy6zAZ2SRyL0ugAziKSmGexVbYkSZIy6lgieXkD2CZxLErvOCKJeRnYJHEskiRJ0lJOJAarrwCbJo5F2fEN4nnxGrBZ4lgkSZIkAP6LGKROB9ZPHIuy5yhiZe4VfH5IkiQpsc8Qg9NXgQ0Tx6Ls+jKR5L4IrJk4FkmSJLWoQ4DFwOu4PUg9+zaRxDwFrJw4FkmSJLWYXYg2ybOBrRPHovw4nUhiHgRGJ45FkiRJLWIjotPYAuLASqm32oBfE0nM9cCgtOFIkiSp2a0EPEcMQI9IHIvyaTBwM/Ec+mXiWCRJktTEhgNTiYHnyYljUb6NBaYRz6WvJI5FkiRJTepCYsB5KbEVSBqINYjzYRYSNVWSJElSzXyBSF4eAEYmjkXNY1dgETADWD1tKJIkSWoWU4iOY28AayeORc3n60Ry/A9gaOJYJEmSlHOjgWeJwyo/lDYUNak24HIiiflR4lgkSZKUc5cQA8vvpw5ETW008AyRKO+VOBZJkiTl1CcpHzo4LHEsan7bEfUw04FxiWORJElSzqwOzAbmAhumDUUt5GQiab4qdSCSJEnKjzbgz8RA8ouJY1FrGQT8k3juHZg4FkmSJOXEp4kB5F/wvBc13vrAPOKMGLeSSZIkqVvjgFeJAeQGiWNR6/oOkUSfnzoQSZIkZdtviYHjN1IHopY2FHiY6Er2vsSxSJIkKaN2JJKX+4laBCml7YgE5iF8PkqSJKmTdmAqkcDslDgWqeRi4jn5pdSBSJIkKVsOIwaKV6QORKqwMvA28DoW9EuSJKloWeBlonB/zcSxSJ19i0iuf5o6EEmSJGXDicQA8fTUgUhVjABeAOYDkxLHIkmSpMTGENtzZgPLJY5F6soRRJJ9XupAJEmSlNYpxMDwpMRxSN0ZDDwFLALWTRyLJEmSEhkHvAnMBEYnjkXqyWeIZPui1IFIkiQpjZOIAeHxieOQemMQ8ASwEGthJEmSWs4wovPY21j7ovw4kki6f5w6EEmSJDXW54mB4BmpA5H6oJR4v0k0oJAkSVILaAOmEQXRnvuivDmJSL6PSxyHJEmSGuT9xADwstSBSP2wAjAXeA7oSByLJEmSGuAKIoHZKXUgUj9dRDyH90odiCRJkuprArAAeJLYSibl0U5EAnNV6kAkSZJUX18nBn7Hpg5EGqBpREvliakDkSRJUv08DswHxqcORBqgY4lk/GupA5EkSVJ9vIcY8P0+dSBSDaxEdNK7N3UgkiRJqo/vEwnMQakDkWrkNuI5vX7qQCRJklRbbcAzwDvAqMSxSLVyBJHAnJA6EEmSJNXWNsRA74rUgUg1NI4o5H8gdSCSJEmqrVOJBOZjqQORauxm4rm9ZupAJEmSVDv3EQXPy6UORKqxrxAJzBdSByJJkqTaWBFYAtyROhCpDjYgEpjrUgciSZKk2jiUGOAdnzoQqU6eAt4GhqYORJLaUwcgSU1gj+LHG5JGIdXPn4GRwA6pA5EkExhJGrgdgFlEHYzUjG4tfjSBkSRJyrnVie1jf0wch1RPE4nn+c2pA5EkV2AkaWC2K378e9IopPp6CXiWOO9oUNpQJLU6X4SkbBkMfAr4INGSd2Tx8kfgLGBOutDUhVICc2fSKKT6uxM4GNgMuCdxLMoe378kqQWNAa4E9gU6Kv5/dWAa8GLxc2XLVMpv1lIz+wKeB6PqfP+SpBZ1PrBqF1/bhhg4PNC4cNQLHcBc4g1aanbbEq9D56QORJnj+5cktaARwDvEtozBVb4+qPj1ArBOA+NS9zYkHpNLUwciNcCyxIGtbpdUJd+/1HAW8UvZMJY4IG4yMK7K1xcBbxQ/n9CooNSjTYofH0wahdQYbxGF/JsAbWlDUYb4/iVJLWxHuj5jYRCwmJj9XLZhEaknpxCzivukDqSojdhrPgeYAbwGzARmE1vdFlI+z6PSVsAC4M1Ot3uTmDldBHy1i++5CjAfmFe8/kziTJx3gCdr8DMpW64hnvNrpA5EmeL7lyTpXXYhBg3XpA5ES7mceFzWTh1IhVHAFpRjKwAvAAcQcQ6pcps2YHliS9z0itvdAGxPnAHS3Yz7GsAXiUFKAbga2BNYZsA/jbLmVOIx3jN1IMoN378kqUXdRMxqd1UkqTRKHciq7ftO7VOUE5Ez+3C7Kytut0cfv+cLwI/7eBvly+HEc+OI1IEoN3z/Us1ZAyNlWzuxTWl1YAoxQFR2rE6sWCxMHEc1Myo+H92H271S8fmYPtxuW+J3cVwfbqP8eab4cfWUQSgXfP9S3XiQpZQ9awJHApOIA+MuJIpm5yWMSe+2LLHt6qHUgXRhZsXn1QpruzK/4vPle3mbNuBk4BCymcypdkoJjDUwqsb3L0lqcW3A+sAxxFalvdKGo042IbbSXJg4jq6sQ3kr2F29vM1EouC/dLvje3m7Q4jaCDW/IUSt0z9TB6JM8/1
LksT7iAHl6akD0f+3B/GYfDd1IF1YjnIi8ngvb/Mb4KyK2/WmnmUUcDcwsh8xKp9eAZ5LHYRyw/cvSWphNxFvAvulDkQAHEQ8Hl9JHUgX2ih3BZvZw3Uh2qCeDXyQcgJzYS9u9xOiw5lax8NEq26pt3z/kqQW9UX6th1I9fWfxONxSOpAujGDiHEJ0NHN9TqA24hVmymUE5jrerj/TYA/DTxM5czfiOfH8NSBKDd8/1JN2YVMyoYdiXqDFbu5zrPFj5tT/SwPNVapwL03qxuplDqRtRGnZXflS8AlwOss/fMs18P9/wg4ut/Rhb2Ba4ltaDcDl9H7DleHAf8AHgT27/S1vYC/EgOmu4HPV7l9G/Ez3AY8AHyDpRO9FYtfv7V4nfuIsyw27XQ/44ATiVnmW4B7icRuq17+HHlTeo70tsmDmpvvX5LUol6l5/M69i1eZyEwrBFBtYDtiEHpCcB6fbztT4nHY0qtg6qhOymvpnT1840nBt6lCa3lK27zSDf3fTADK9xfEbieeO5XJh+bEnGvQhT/3tHF7TcHfkbEfTyxylQ6Cfw/gSeJFSKIn+lZ4EOd7uOU4nUBPk78zN8q/vsQIjnamXJSswxwD9FRaePi/+1ZvN4elA/7HEokPYvJ9vOjv35J/K4268NtBhO/qwvo+jFVPvn+JUkt6n7iTaC7gwOPId4A7m1IRK3hYOINdR4xAH6KGMCu34vblgZxnWfjs+RaysnItl1c53yWXilop1w782oXt1mWGLT3t3B/EtGOdyblJKPSIcDTxRi6Kha/jGggALBr8bo3EgnDS0Q715JfFb9+RcX/jScSpJKtitd5CfgccDnVjxr4cvF6vyX2899A9dWtgyti6k4bcBTwd2Aa8ARwFdmuFTiT+Nm26eF6paTlQuBNYAHx92ar7ebi+5cktagzgff0cJ1LiDeAb9c/nJZxMFGMXKi4lJKZZ4CTgA26uO1Fxev3JtlJ5QLKP9e+Vb4+BTi3yv+XamcWUV5VqPQj+l+4P4pYHSkQqx5dXWd+8ToXVPn6KsD/Vvy7VLczh1hRm1Lla4VOtzkGOLbi35+quN7f6HqWuHQS/QxiJaGr7TD7UU6IujKCaJxwEOUVsPHA9ynXIHW39S+VHxDx7VTlax3A9kQ3u1mUE5bKvzETmObi+5ckNbFBwNXEtpztO31tArG3vqvByobEYPIB3D9cS9USmMrLXJZOZjasuO3vitepnOnPmh9S/lkO7fS1dmLrWLU6hscqbje609c2BP44gJguLt5vd9uIRhPP9wLwySpfPxr4aMW/j6Qc7392uu4k4m/uHmDtiv+/nUiESkqrCm8DK3cT2xmUB+EbdXO9o+l5G94ZdL0y9tXi7W/u5vapnELEVppx703SYgKTX929d4HvX5LU1Ham/AZ+TpWvv4coAP4wS7/Ib0qc43E3Sw/ANHA9JTDdJTM3F/9/lc53miHfoBz/f3X62pFUL2yH2M5Uul3nE9dvANbtZzzbVNzv57q53n4V16v2+72N8vYxgPMoJwvddVurdHKnf99evI+f9nC7UjvYy3q4XilRu7qLr69NtKDuyiBiO1lXq2cpfYuI6zh6n7SYwOTXznT/3gW+f6nBqu3vlVQf9wB/JmaEf1Xl61OBDxAzyJcQbwKjiBnhnxMDhUUNiTTf2omZwNHFy6iKy7LF/xtDFFpvRu9nBEstY1dn6W0QWR6MVXYUq1xpGUcUtO/Zxe1mVHy+PJG0ARxI7GHv7cGYnR1f/LiQqPPoyvuKHx8HXqzy9bOImoqS9xY/XkbU7/TGiRWfjwS2Ln7+525uM6jie3XXProN2L34eVcrKAcQdTZdWUSs8h0PfISoZ8qK0tjhtH7evoMYCM8itmy+RTyes4DZFf9+E3iDeA1UOj29d4HvX2owExipcd4C3t/DdeYT2360tKHARGJrz8pEB6txwApEzcC4TpfKuo23KQ+GSpfSwKmvFhGDr7eIFZkVyXY7+spEZFzF598lBvBLurhdtVbKyxAF7Lu/++q9Mory8/9momVzV0oJzK1dfP3Kis9XpdwI4IZ+xrYzMeBaTNS/dOU9RBIMMdvclW2I5yVEp7VqNiZWyL5F16s+jxY/Zq3OqpS0P0kk9AXi76IvfwttxOreMMoTDGMqPq+cWJhHPJdnEMXir3X690vAdODfwCvFeFQ7vXnvAt+/1EAmMJKyYBxRS7ImsBYxKFqJGJyuRCQqAO9QHqjMIAYy0ygPZkr/N5OYyZ1F94OZg4nBZnevhQuKX38duJSYFb+TmO3/MJFcZVW1FZitiMHjP/t4uxOJ4v3+nsC+C9GVCrpOTCCSwg17cb2SvYsfXye2qfRHaXD2fyy9stPZrsWPTwAvdHO9g4sf7ya6qVUznFgN3JWuE5i5xY9ZG5CXHsejgL8AuxE/835EEjOI7pOZxXS9fbGklNiMZenJicoJi3WK/16ZqMNoJ/5eXyZW7l4qfny6eHmKWE2c38ufU1JGmcBIapRRxEzyhkRnr7UoJy2jiVnd54iBxjPE4O9KIlkpDUa6m7Wvpa6SlkKn60C2i1I7JyJtRO1HtcL4rm63HPG4bUTUPPTX6hWfP9jN9UqrL4vpXQH7PsWPN9H77WOdlVaVultVgXIC011cgyg3GLiom+t9hXhOdbeVbq3ix4d6iKvRSs/5+cSkwnXFy3D6nsx0ZV7x8irRVKIng4gkpjTpsSqxarsKMUmxJpHsFIhJkFJC8wRRO/Uw8brjNicpB0xgJNXaCKJwc1MiUSklLKsSScoTlM+7uJHy7OgLpBk8lAZXpaRlBtFu93dEEtXV7HdpFjcvCcw44DNELcVrPdyucw3MD4kB90CMqfi8uwF5KYG5i6h/KDmUd7dUHkqs7ED/O6OtRHmLVncJzFDi4NOerrcHsWK4kHgOlaxBnFj+6+K/nye6kHVn5158vxRKq46dVzIqk5lliNWxjxO1VgVi5aZeWy4XEYnJ9G6uswxLr/SuSSSlXyKSnQVEsvQI8Ro1jWjL/RTZWwWTJEn9NIoYlB1NzDY/RAwk5hNv/L8lipAPIJKYwdXvJpkDiYHJy8T2qPdS/dyTas4u3ran8w9SGkK5e9AbRMLYmwHk/hW3exw4tQaxfKJ4f7O6uc4QYqWtQLTqLZlI9VWP3YvXXUK55qSvPl68j3fofjtg6bDMxZTrgqo5v3i9azr9/6nEeS+9tWrxe02j953VGuXn9O25vwzxt3Yt5VWbrBlDnBn0WSJhv55YES4Q21H/AvyYWL3ciOw9JpIkqYo2YjXls8Qg7RFi4DiXmC3/OdEadwuyvSpRaRjRiay3SUulU4nBTX+L2hvlLcqD/Ck9XLdkJ8oJzPNEl66BmkzPCUzp7JQCMeAt+SpRb9FZ6UDF7up5evILenfeyneL15vaw/We593xDyY6OfXl7+LXxIrADn24TaNcSv/PQBpJNDDIi3HE3/hxxIraE8Tf0hzgr8D3iG2M47q6A0mS1DhDiNWVE4jtOa8Tb9wPA78ktvRsTOtuRS0dNPix1IH04FkizvP7cJuNKScSB9QwltLZOd
UGvp8iBoivF68zueJrd1Nu5FDpvuJ1TxpATI9SPtOkO3cVr3d6N9dpJxL6zufnnAAc0YeYdiveR19u00ilx7HzIaetYhSR5H+d8pbMAvFcugA4jGgwIEmS6qydWD05lmhHO6d4uZUo/P4AXZ+03Io+TQxavpg4jp5MJbaP9WWL1UrEz9bdmSj9sQGxXe9KyonvKGIW+yfEc/C04vf+YPHrXwS+U+W+ViAS6gLlM1z6agLlRG2zbq43nFgNKQDb9nCfvyteb+Xivz8BnNuHmFYmfkdf68NtGu0+4vfRn5XLZrUe8ZpwHjHJUyC2oJ1PbFNcMVlkkiQ1mQnElrDLiMLthcQp7CcTBcRZbhGc2geJQcq3e7piYpcQM8J90UEMvuoxizwBuJB4nv2ZKPjeu+Lr7UTC8jBRs3Mq1QfKKxDbtX7Txdd7o3Qfl/dwHyOIrUO39OJ7jS3GdC9RN/Fdej7jwW4AACAASURBVP93NJzYDvfNXl4/leeJJEtdW4noxnY+5VqaB4nGDbuRn222kiRlwmRiS8s/iSLhJ4EziX3coxLGlTfbEIOSs1IHoqbQDlxNHGyZZW3ENrmstXbOunWIs2+uJOrSZhMTR4dg/YwkSe/STtSynEXMBC4iThv/GrGlR/0zkUhg/pA6EDWFM4nVmqxbkXje97dttWJF7v3Ea/KzxGvyHURd3arpwpIkKb2tidbALxBtS68iWoAu392N1GttxO/VmWgN1DFER7WufKNRgfTCFCKB+VnqQJrIZkS7+fuJVfHbidqvCSmDkiSpUdYjZnGfIops/0hsUXBrWH08SjQ6sJhZ/bU/PR9oeU4jAumlg4gE5tjUgTSpDYiueo8QKzM3ER0fl0kYkyRJNTeS6H5zOzF79xfgcFxpaYQbiMFcfw9RVGvbhlgl7c5kBtYmuta+STznP5w6kBYwmejQ9zxRN/MrYLukEUmSNEBTiJadbwIvEieXr5U0otZzFjGYy+Jhg8q2tYBXiGYad1W53A08RnQGPChRjNVcQDznt0gdSAtpB/Ykiv7nE6szx+IWM0lSTgwBPkIMcErbCz5C6x4mmdoRxGDuC6kDUe78ifIZND1dspQs3EO89gxPHUiLGkOssN9HJDOX0fPZRJIkJTGBOCn8RWLW9jRglaQRCWLgUADOTh2I1AAdRAvlaakDEQBbAhcR9Y5TiXrHwUkjkiQJ2Ih4g5pPvEF9Eg+XzJJlidPg70wdiNQAGxAJ+6WpA9FSViUmtWYS9TLHYtG/JCmBzYkDzxYR7Y8t3Myup4k6pPbUgUh19lEigTkhdSCqagSxvexxYAZwIrHlTJKkutqc2NO8CLiObO19V3WXE4O6jVIHItXZj4jn+gdSB6JutQP7EPVKbxKHpK6YNCJJUlOaTLTkXUB0+Vk3bTjqg2OIQd1hqQOR6uyfRKv2sakDUa+0E+cM3Uu0YT4ZzwSTJNXAJOA3RKvUi4DVk0aj/tiaSGAuTByHVE/DiVq8B1IHoj5rA/YFHgZeBY4iOlpKktQnyxFFl+8QrZA3TxuOBmAwMIfYdy41q52IRP3nqQNRv7UTbfefAZ4j6mWs3ZMk9agD+CLwOrE/ebe04ahGbiO6ka2UOhCpTk4gEphPpA5EAzYc+AYwC/gH0Y5ZkqSqphB7kV8BPk0s66s5lAZ3n04ch1QvdxBJ+sTUgahmxgG/ILYw/xxrmyRJFZYjusCU6lzGpQ1HdbAlno+h5jWWeP26N3UgqostgLuIc2S+jNvKJKnlHUJsF7uT6DSm5tQGvEw81oMSxyLVWun8l++mDkR10w4cQSQxtwPrpA1HkpTCSsC1RA/+w3G7WCv4NTHI89BRNZvzief2jqkDUd2tAPyOaEziaowktZCPEKcg/w1YO3EsapzSLPUPUwci1dBg4DViZt7VxdbxQeAlYveA55JJUhNbHriKODDsSFx1aTUjgLeB5/GxV/PYk0jMz0sdiBpuPHA58bp2ROJYJEl1sB3RV/8OYI3EsSidy4jB3pTUgUg1cgHxnN49dSBK5mNEy+XLgNGJY5Ek1UAbsU94HtFpbHDacJTYh4nB3hmpA5FqYAixdew13D7W6iYRncqeBbZJG4okaSBWAG4gznV5f+JYlA0jiC2E03HAp/zbj0jIz00diDJhCPATYsLuqMSxSJL6YTKxZew2PNhNSyt1bNo3dSDSAP0Bt0Tq3f6DaBl/MTAscSySpF76IDCbmJV0y5g6m0IM+q5LHYg0ACsDi4BHUgeiTFoLeJg43HTVxLFIkrrRBhwHzAe+lDgWZdv9wGJgtdSBSP10IpGIH506EGXWssDviS2zWyeORZJUxTDicK/XgJ0Sx6Ls+zIx+DspcRxSf3QAzxC1DuMSx6Js6yDOvppLnIUlScqI0UStyyPAmoljUT6MJc5OeAX3iCt/St30fpM6EOXGobg7QZIyYwKxx/f/iK5jUm/9lBgEHpY6EKmP7iKeu5unDkS5sjcwBzgtdSCS1MrWAB4HbgFGJY5F+bMGsBB4FGhPHIvUWzsQycuNqQNRLm0NzCC6MdpKXpIabCPg38Bvid73Un9cTgwG90kdiNRL1xDP2T1SB6Lc2oQo7L8UkxhJapj1gZeBX+DMuQZma2IweDfRxU7Kss2BJcB9+HzVwKwBPEtMAnakDUWSmt86xMzRBZi8qDb+iAdbKh+uI56r+6UORE1hEtHN7ne4EiNJdTOJmDG6FGeMVDtbELPaD2BSrOx6D/E8nYqrL6qdtYEXgQvx9U+Sam5V4DngEkxeVHtXETPbH0kdiNSFG4jn6AdSB6Kmsx5RU3pu6kAkqZmMAR4iildd5lY9bAIsBp4AhiaORepsDyJ5+XvqQNS0NgJeB76dOhBJagZDgJuJIuuRiWNRczuPGCQemzoQqcIg4EFi+9j2iWNRc9sBeAc4MnUgkpRnbcBFwJPA+MSxKH8OA97fh+uPB2YBbwIr1SUiqe+OIhLri/pwmw7gu8BadYlIzWxfYD42NZGkfvs+8AqwZupAlDsTiESkr92ajiUGi+fVPCKp75YHZgJvA6v08ba/J845kvrqGOAtom23JKkPPg7MBbZKHYhy6Tzgr/243RDgcaIexu06Su0CIqE+oR+3XQ9YgM9j9c+ZRNfPcYnjkKTc2BSYA3wmdSDKpQ2IgVt/k9+diXqDR4FhNYpJ6qudGfjz8GfAP7DtsvpuEHAbcAs2z5GkHo0lal5+ljoQ5dYNwMUDvI9fETPf/z3wcKQ+G050xBvoSuAKRF2X7cHVH+OBF4AfpA5EkrKsnTgV/S5sZav+2ZXoojNpgPczljgXYQGxIig10g+IBLoWEznfBJ7G11T1z+bEdu4DUwciSVl1AvAysHLqQJRLw4FHgO/V6P4+TAwiH8StZGqcnYmVl+eBUTW4v+HEIcAn1uC+1JoOI4r610kdiCRlzZZE68a9Uwei3PoRUS8wvIb3eRGRxPykhvcpdWUMkWwsJlYTa+V9xGriljW8T7WWS4CpwODUgUhSVowEHgPOSB2IcmsK9em4tAzRlWwJJteqv0uJhLlWq4iVzgXuJzrtSX01h
uhKdnLiOCQpM34FPERtZ87VOkYQScbpdbr/bYGFRE2MB1yqXg4jkpep1CfJGEk0SPlOHe5brWEHYqJo58RxSFJyBxBF1xunDkS59VOi9qWedSonEoPLO3ALhWpvK+J18E1g3Tp+n10YWItx6VTgGWpTnyVJuTSWKNo/JnUgyq3SgGzrOn+fNuAqIon5aZ2/l1rLckSXsCU0pt3xWdiYQv03GLgPjzqQ1MLOA+4GOlIHolxamdjWdVKDvt8Y4myOAnBwg76nmlsHcBPxnDqtQd9zJLFieUGDvp+az3uIiaPtUgciSY22I/ECuHnqQJRLg4HbgRtpbAK8CfA2sd1n2wZ+XzWnnxLJyy009nm8HnHA5Rcb+D3VXM4gVvJsCiGpZQwFptG4GUc1n18QhftjEnzvA4g2t68CayX4/moOxxDJy+PA8gm+/75E6/qdEnxv5d8IYuvj8akDkaRG+RbwFPECKPXVEcShaikbP/wXMfh8ElghYRzKpw8Ci4AZ1LdovycnE3WIqySMQfm1N7Ea7USOpKa3MjCHeAOX+moKMA/4aOpAiCLWArGVzWRcvbUd8RqYhW2I7cAfgH8QK+NSX10DXJE6CEmqt18Dt6YOQrm0FlG0X49D/vqjA7iWSGJuwAGgerYFUXuyCPhw4lhKxhLNKX5DJDRSX6xFTCrtmDoQSaqXzYkDAbdIHYhyZzzwGHAl2epaNwS4nkhirgEGpQ1HGbYusV1rCXB44lg6W5uYHLA1rvrjJ8C9mABLalI3E62Tpb4YTZw7cBPZXOVYBriTSGL+l2wlWMqGtYHpxHPk6MSxdGUT4HUa15ZczWMsUc/1idSBSFKt7UsUXq+UOhDlygiixuQu4vyKrBoD3EMMUH9HtHmWADYCXiKeGycmjqUn2xBtwr+aOhDlzjHA82RzkkmS+qWNGNx9N3UgypUhwJ+AB4gZvqwbQyRaBeCPwPC04SgDtgBeo7EHVQ7UbkSDgc+mDkS5MpRIYL6QOhBJqpV9iFk9282qt4YQnW0eByYkjqUvRgF/pXw44TJpw1FC2xMF+0uAoxLH0lcHEmfEZKXRgPLhi7gKI6mJ3E1+Zh+V3ghi5eVJYFLiWPpjBNGVrECsHq2aNhwlsD8wl+g2lteVjEOJJOYzqQNRbgwFXiTO6pKkXNubWH0ZnzoQ5cIYoublYeLMoLwaShT0F4gZyU3ShqMGOhZYTCQw+yeOZaD2I36O/0odiHLjKOA5YhVdknLrH8APUgehXBhPdBu7GxiXOJZaaANOIZKY2cAeacNRnQ0CziYe71eA96YNp2Z2Ad7EVXT1zjCiacVhqQORpP7ajtiCYOcx9WQ14pyXW4FlE8dSa58GFhDbiU7CsxKa0QpEm/gCcSjkOmnDqbmtiDa5P8fnr3r2NWAaMYkjSblzOXBR6iCUeRsDLxBF+81a/Lkr8CoxwL2C5kvSWtkUYt9/gajdykPHvP7YlDjs8n9p3r9T1cYYYuu4q86Scmc1YCExcyd1ZT9ie8o5NP8BkKsC/yQGutOI80GUX23AfxKrzEuA/6b5VyfWJjoD/h1X1tW9s4l28pKUK98nirGlatqA44jB33GJY2mkocCZRBLzDvGzN/ugtxmtAFxLub7pQ2nDaahRxM8+Hdg6cSzKrnWJbbPrpw5EknprBLFf2jMEVM0w4DfEc+R9iWNJ5WDijJDSoZd5Ouum1e1JbKUqECsRa6QNJ4kOoqj/HaLGS6rmRuB/UgchSb11KFHTMCh1IMqcVYCpxPkorTjwq7Q6sUpZ6lr10aTRqCejiCL2JcT22JPwNe4gos3yufi70Lt9kFihHJE6EEnqjb8C300dhDJnN6KQ/So8ob6kAziB6FJWAH5PJHnKln2JSZkCccDqlLThZMp7ie1k19Mc7c9VO4OI58YnUgciST1ZgzjEzX2vKhkCnE4M0k/E1prVbEycmVSqqfgCzd/UIA8mAr8jHpeFxJlWzia/20RiNXE6MVEhlZxOtBiXpEz7Dhbvq2wNok7gOWCHxLFkXTtwONGVrdSpbM+kEbWuIcCXiWSyAPwLOyr2pINyY44zsdWywrrEpOaaqQORpK60AU8Bn00diDLhEOAt4jygZj0box5WAy4jBs6lc2NavV6okT4MPE387t8Ajsb6jr7YmjjM80FiZVG6C/hW6iAkqSu7AHOIYle1rtHAJcRKwuGJY8mz9xJv/AVi+925wMpJI2pu2wN/IX7fi4lDeMenDCjHRhGdBucSK1lqbYcTtWOSlEk/J/aLq3V9AHieqOdYK3EszaAd+BzlAvK5wI9xYF1LOwF/o7zidQNx6rwG7pPERMbVRJ2MWtMYYhJmi9SBSFJn7cBLwIGpA1ES44lVl3lEof7gtOE0nWHETHbp/JE5wFmYJPZXO7AP0TGxlLjcRqzCqLbWJIq4ZwGfxyYereom7E4qKYO2Jwavbh9rPR8BXgPuADZMHEuzGwEcC7xMDLoXETVG26QMKkeGEita0ygnLn8Fdk0ZVIvwdaK1HQk8ljoISersDOIMC7WONYE/EzOrXyZmtdUYQ4gmCQ9RHog/THSBWi5hXFm1HnFy/KuUa1yuA7ZNGVQLWo6o5VpAPB52KmsdE4gJl41SByJJJW1Em9xPpQ5EDTEY+BpRj3EV7m1PqY047fom4oT4AvA2cD6wI62dVI4BPgPcSTnJe4No8WtL17T2Ap4lEvDt0oaiBrodu5FJypDNiUPenPltfh8EHgFeBD6UOBYtbRViBeY5ygP2F4kB+/a0Ru3BcKK25SKiTqj0e5hKdEIamS40dbIMsXK/gKifWy1tOGqArwD3pA5Ckkq+TsxyqnltAPyRWHU5DWudsmwQsC/wW+IcntIg/jngbGA/YvDYLNYCvgj8gXh+ln7eJ4ii4fXThaZeWJ/yY+drS3PbkFgptouipEy4BTgpdRCqi+WJGfwFxOGKqyeNRn01nDiY8TJia1lpcD+f+Lv9JrHVbHiqAPthIvBR4KdEUXCh4vIUcDq2a82j3YAHiEL/LwMdacNRnTwPfDx1EJI0gug+ZjFscxlCDCJmAf/Ex7cZDCUGiT8iCv4rB/4LiEMzf0yc3TGZeA6ktjxxQO5RxMGIz7B03O8ANwLH4EpLMxhEbPV7hXiO7pk2HNXBr4ALUgchSXsCs/Hcj2YxmGgz+yzwNNH6tBVqJ1rRasAniG1lDxKduTonNQ8ClwLfIwaWuwNrU9sVm+WJOrr9iT3y/0McJjm9UzwFYkvcTcSK7+7EBIqazxgi0Z5PdItzRa15HEj8bfu+0uR8gNWVwURbwuWLlxWKH5ctfn1s8eMQYs/pouLH2cX/n00s1c+suLxCDGL64kfEtqID+vEzKDsGEYPZE4lC59OIge38lEH1YHfgMGASMeCZRzQYOAf4S7qwcmsMsDWwGbBJ8bIhXa/EzKX82vEasVoH8Ti80+m6o4mOaIMpv2aVLoO6uP+ZxJaiB4uXe4v/XtS3H0s5tjbwbeBjRA3et4F/JY1IAzWWeL2YTHSh64vhxLinNN4pXYYTryOl8c8I4vUJYiJmDjH+mdnp8howo58/
h3pgAtPaOogX8E2JIuo1iWRhdaLjUK33CC8AXiC2aTxb/PgwMWh4lpgB7WwLYmAytcaxqDHaieTzFKKL3M+IpPStlEH1YAyxKvA8caZEqavNWOAEYhb/98DBxBtXThVOAaZC2zUJgxhM+XVnjYqPK9G7JKQrlcnPq0S3tMrXnaeIwzoTKvwMuAza/po2DhHPweOIttg3EInMvUkj0kAcQBwgWy15GEacFbMpsC5Lv/ZMqEMsc4nXnNLrz1NEYvUvIsFRP5nAtI42Yg/3FKLeYDPij7jaVo2ZxB/aC8XPZxQvM4lC3YXFjxAzoR2UZ1FHFf89infPYkwiXiiqdYF5k/ijvo/YK39nMQblUylxORkYB5xF1D+8mTKoXuggkpOziIFMNb8mDnm8ktgCVy3xzoHCdOBL0HZ16kh6YXTxMohIeiq7nFWu/M4tft55hSaDCtcCD0PbN1JHov9vA+AbwEHE3/+JwP1JI9JADCYmQbcF3kskLevw7gmRAvASMeaYzrtXUkqrvvOK13+L2EnQXryMJsZYnVd/x1OeFK52qOrLxOrvPcDfi5eZ/f9xpeaxEfBV4FoiAanc670IeJToHnQCcebGpjSmxeRywJZEp59TiAHj05QPyitdphfjO5JIfJR9pRPcHyZeiI+nvOyeB0cSrXG7sz7l5+hBdY+oLgobQGEJFMaljqR1Fb4KhbtSR6GqNiEmKBYR7cKtkcmHQcAOxLjiryzd9rzUlOMeosj/K8D7iVWYaslFLbUR3Q23Aw4lJvNuJlaHK+NbAkwDziNqeTz7Ti1jBPAfxLaXysPmCkSmfxWR0Ewhm21NRwE7EwnVH4lTrSt/hkeIP/zdsag/a8YAXyO26rxMPIZ5PG/hBmJb2HndXKeDeCMsEIObHCocCQVnlpMqbAmFhVDIU4I/UG1Ex7e/EwO1J4j3pf1SBtWNzYhJtEVEW/AP4M6VrBkPfBa4nHePGR4jkpXDiAndvm5FbYQVief/94E7KL+3lCaa/06sBJpEq+kMI1ZPfsvS5zC8RaxsHEHUueRRO7AxkXTdRCzfln6+GcQgc3ey+aLUKlYiOja9QQxGvkw2k+Pe+gflmbDukuSnitf7UyOCqr3CpVD4SeooWluhAwpvQKFVWvmOIBp3HES8tkMMPr9P/C1dR7k5TNasTjQemQU8Tv5f5/JuHNG18GZikF8aF/ybSFg+Smxhz6OhxCrSd4nVosqdKU8CpxINCqTc2g64kNjvXXpyvwCcAbyPbJyxUGsjgX2IxGUm5Z/7VeLQuU3ShdZyJgMXEfVQdxC1IM1wMNyBRB3Wf/dwvTnEc+/UukdUF4XpUPhQ6ihUuBYK30sdRYOcQdfnPX2V+Hu6uXHh9MsoInl5geiqeRJR66D6G0Icmns98b5Tev+/n9iqvDnNuTo2AfgUcA1LT+I+RhwUvFK60KTeWx44mqUPiXuJGLxvT3lWqxUMBvYiZlsql43vIjrJjEwXWtMaTBTm30a8gVwKbJU0ojQmUX6+7ZU4ln6w/iU7WqYOZm2guxW/QcQKbgHYtyERDcxQ4n3mIWLnw1nEFiXV3rrA6UTCWHrdfZDYVrVewrhSGE3UmP6BOIKgQLwXX01sb2ylMaByYm3gTMqzvouJLVUfwe1TEG8mHyF+J6Xl1jeJ39mqCeNqFqsR3cReIla+fkxsqWhVJ5Dv7WPWv2RGy9TBHEfsGujOKcTf1W/qH07NtBGTGNcT78u3E+3V610U3gq2J7YVVr6nn1v8f8V2y8OJYyhKid3TxN/a6IRxSUAUtFf+Ab9IzDqsnDCmrFuX2FNd2mK2gNjq5J7RvmkHdiMKWBcS5/AcjieTr0nMuD5BnJWUQ9a/ZEfL1MH8hqgfOaqb63yCeM3+v4ZEVHsrE4PHF4hdAefiqkxfDQY+SWzjLQ3K7wE+x9Kt07W0HYCLKa/KvA58D7eXKYHtgFtZeo/n4UTBvnpnJPE7e5Ty7/Em7OTRkxWJN+GnKc94mfyF7YlzAi4jnx3Wiqx/yZaWqIO5gngN7u7Q1P2L17m7IRHVzxCW3hFQqhG0c2bX2onfUWkb4RLi97cPzVnXUi8TiLqs0gTufOI9fGLCmNQitiVaNZYG3LcQs+Dqvw7ihbE0o7OEGIBumDKojBlCtN++ilixuhf4PPk6v6UetiC6+/2V2H/9L2DrpBENmPUv2dMSdTCrAccQ9WNdOZZ4jT6/IRE1xvrEltuZxBbc72OzmUrtwMeJgvRSPcf5xIGi6r9lib+3l4jf6xziuefrvmpubaIIq5S43E5sH1PttBEF6A9RriP6BdHGs1VNAX5O+bThs4nTivVuw4mE5mJii8v70obTX9a/ZE/L1MH05I/Ea/PBqQOpg+HE1qibiPee+4iDFFt5i88uxO6S0nknF5HfIx+yajjR4a90YOabxA4La7Q0YKOIrLjUGm8qsEfSiJpfacbnSeJ3Pgv4L5qz7XQ1qxIvYI8Rbxo3EV1NWr22pS9OJJ47vyJ3XV+sf8melqmD6c6qxMB+Gs3Rjr07E4lWzPcQP/MdxHbnVqnxWItY7S9N2F6OKy71tgzRcnkW5fNk3EasfvskcWJ5qRXyoeRuMJRrQ4mT40vn6DxOHIrZjJYjtoTdQWyh+zvwBTy/oL/aiIFWgZ7Pi8kY61+yqSXqYLrza2L76g6pA2mwzYktZv8mZscvAHalOZO4YUSnudKE7T203uOd2nhi50np8M+bgXWSRqRcWR24gXjyzCM6RbT61oGUJgC/JGbCCsThoM0wsF+OWFm5jijke544SXrdlEE1kSMpb0V8T+JYesn6l+xqiTqYruxG/C0dkTqQhEpdHy8C3gJmFD/fh+Yo/t+Rcp3Ly8BnccI2pcnEeW4F4B3g63gkh7rRTiwbv0U8af5G6x3ClGVTKNfHvEKcyJ7CeKLl6Cf7cdvlKSctC4iOWWcS3bPs5FJb21PeAvHjxLH0kvUv2dWydTArEwPar6UOJEOGE4nLRcSqzEzKyUx/tjpfSSSHKRKGUURt5ZLi5TxgTII4VN2niGS5QNRlbZk2HGXRqpSz3dlEIuPsQ/YMJupDSkvcVxArGY3QBnyaeMMq7YvujRUoJy0LifbHJi398xHiDIdb6PlNdjXKCcy1dY6rRqx/ya6WrIMZDvyT2Juv6oZRTmZmEefLXES8VvWmbnFVyueh3U1ja022oVxr+hR2VM2q8cRzqtQF7iSacwuj+uFA4kWnQGwd8xDK7NuI2J9bILZe7VTn77cm8BfiTaY0KJ5P151CNiYSrTuJZOch4kXH9pwDcxPl3/9ePVx3s4rr/rbOcdWI9S/Z1lJ1MO1E581vpQ4kR0rJzK+JgwpnA5cQTWm62vZ8CDCX8uB0EXA69e1CNZiodVlErLr8iEhWlW37UK7Lvp0od1CLWoZyVvsOserijHh+DCFe6BcXL6dS+z2ig4CjiRWf0gm6pcsiYhUF4sX/A0TL42eLX/s7MXO5fo1jamXXUG7oMLKH6x5A+bE6vs5x1YD1L9nXUnUwZwLfTR1Ejg0B9gTOISbZFhGDzq+z9ETWhZQLtkuXecX
b7FKHuNYkVnoKwHRcdcmb8ZTbmc8CPpY2HKWwNvAA8SR4GE8yz7NdiG1FBeJAwxVrdL+TiedIabta58s7xBa2y4htZW8T28QOp7XPDqinY4D/6eV1T6B8MGoOWoBa/5J9LVMHcwzwg26+/o1GBdJE1iQmSW8iVvJfJiZQX6f6+0tpYu4SardNeq+K73cVzdEMp1UdQhx+WQDOpXWOmWh5+1DutX0BLp02g3HAnylvKRvIaewjiLN/FvHumbHOl9LpuTthh5BaGE4cHvcZYjtGZ2OB54j6lp7uZzrxGF1cywDrx/qX7GuJOpj9gTN6uM45jQikiY0DPgH8nu7fX0qrMTOJAWt/tRFbARcTydNRA7gvZcdmlGuY/kbtJm+VQW3At4kZ2flEm1U1jw5iG9kS4kX/0H7cx15Ev//O28W6unRXB6O++zbl321Xg6S9iJqijbu5n/8u3scjwOhaBlgfhTYo/Nv6lzzIdR3MIKKu5RHK218rbUPUQ3RnMlHPp4H7JOX6l+4uS4j6mBuJov++WIZyovQS1R935ddYylvKpjOwyVtl1BCi9W0BeJF4oVZz+g/Kh1+eQu/qmsYTS/WLiTeL3iQvnetgNHClF+LSFoeubE80SDiDmIUqdQxclWgDWiA6leWknsT6l/zIdR3MznQ9QbAW0Z7+n8BdVS53E+eELAQOaky4Te8Cel7lr7wspFyv25suqROBe4u3vaP4bzWfdmJSYQmxK8SJyfv+8QAAIABJREFUsCYylnKL5Afo+wyG8mdjopi+AFxO99sEv8nS3cX6cplL1FqoNrYGphLbxLbq4bodxGFrlxL98V8kZqCuBParY4x1YP1LfuS6DmZZYhb/Ud799/Unev+6t0WD4m12pW2u/bm8BGzYzX1vTLyOFog6zWpbctVcDiDGJEtwlbQprEYslxeIF+g8vumofypnn/5K11uJ7iK2nHWeCZtPzGbMKX59MdXfSHp7HozUBetf8qMl6mBUf6XzX7pbaZlDDEgXdvr6guL/f7qL+96V8i6Ek7G7aivZFniNeOzPJPFj7xOv/9YGbgYmEV0avkQMUtU6lgF+R7Q4vodoazmjh+svTxw+Oa74+fIVn69IdBhbgVjZG0XUzEyqT/hqfoU2Yjb1C9B2depo1BuFa4GHoc1uXOqvHYmJtXeIc+hmEgPPl4mtfDMr/m9Gxb9nEglMVz5I7DroAI4Azq9P+MqwtYkJ+3WINt2fIyZglRPrE1tKCsBpiWNRWh3EH3EBmIYHlSpTrH/Jn1zXwah57Uf5zLL9E8eitMYD9xPjnmuw2VBuTAZeJR64HBxgpwbooFzc/TjWQSkzWqr+ZTRwK93v3c+BXNfBqDl9kthhMgfYI3EsyoblKB9aeh0mMZm3IZG8LCEO45JK2oCfUE5i7JmuDGj6+pcJxIDqW8RWuQKwZdKIBsw6GGXKgUTy8iaxNU0qGUVsVSwQLdQ9ry6j1qLc1ePYxLEou35IPEcexJOIlVTTn/+yE3HA2q+A3YlDX5sggQHyfR6Mmse+RE3MHOLvTepsBOUkplQfpQxZBXgat42pZ21EU4cC0X53bNpw1Lparv7lezRPAmMdjFLbjWgCMJ844Ffqyijg/4jX3wuwQVhmLEe5VfKpiWNRPnQAvyWeM7cSB51KDdZS9S/QXAmMdTBKaUvgbWL1ZZ/EsSgfxgEP4Vg5M4YCfyEekLPThqKcGQzcQDx3fo0zEmq4pq9/6ayZEhjrYJTKJKKebAnw8cSxKF8mUj7g9HOJY2lpbcBFxANxPRYnqe+WJbaRFfDkWjVU09e/VNNECQxYB6MERgH/Iv6Ovpk4FuXThsTZQwuB9yeOpWWdRLmOwWV89dcqxJlBS4hWlFIDtFz9CzRfAmMdjBppEHAT8Tf0y8SxKN92I7YfzgY2ThxLy9mXGHC+iAcTauAmA28Bc4HNE8eiltBy9S/QfAmMdTBqpB8Qfz83EVugpYE4lPKxEmMSx9Iy1gFmEdnjdoljUfP4DyIpfo4odpPqqOXqX6D5EhjrYNQoHyLen57F9yfVzv9QPuiyPXEsTW8kcX5HAfh84ljUfEpnxNyMvdJVNy1Z/wJNl8CAdTBqgPWIrT7v0FR/O8qAwcRZXQXgxMSxNL1S0f75qQNRUxoE3EY8x76VOBY1rZasf4HmTGCsg1E9jaB8TMSnEsei5jQR+DewGNg1cSxN6yDij/hfwLDEsah5TQBeJjp0bJM4FjWllqx/geZMYKyDUT2dTfzNnJc6EDW1nYkE5gXibEXV0CrATGAesGniWNT83k/sN36KaFsp1VBL1r9AcyYw1sGoXvYi3oeexE6rqr/S6/NVqQNpJu3EaekF4KjEsfw/9u48XI6qzOP4t7MnZCUbISzBsAWURWAA2YICgrIMgrjgiAoziLghOoorggIu4L4CyiA6CDpAEAURSEBBQTCGVZAYwpKdbJA9OfPHW0X1vel7e6vqU3Xq93mefvouVdXv7dvVfd465z1HyuPbaLiipK609S8QZAIDqoORDIwjGQmwv+dYpBz6AX/G3qPf6zmWYJyDPaG3odXSpXMGkiwYdoLnWCQYpa1/gXATGNXBSNr+DztXzvMdiJTKZGAFNmnEtp5jKbztsfU5lmPDyEQ6aU9suu7ngRGeY5EglLb+BcJNYFQHI2l6K3ae3I9mw5TO+zD2+rvFdyBFdyv2RP6X70CktOLFw77jOxAJQWnrXyDcBEZ1MJKWEdgC3evRosriRx/gj9h79SmeYymsd2NP4Aw0dEz8GYIV829EC6dKW4Krf+kH3IBN83pwA9tfgr2n75tlUH6oDkZScSV2jlzoOxAptd2BtVgdlmYla9IIYAG2cNPOnmMROQL7UPkb6tKXlgVX/zIVOy8c8MM62/bFVnt2wHsyjcoL1cFI2w7EZh17HKvBFPHpi9j79Xd9B1I08YroX/IdiEjkejScUdoSXP3LMGxylSeA/Wr8fjI2jn8mdiVvFfASdmHqKeBB7LwKgOpgpC0V4C/YZ8wbPcciArbe4r+ADcBrPMdSGJOx9V7mozU4JD+2BV4GFgIjPccihVTq+pfAqQ5G2nIalrxM8x2ISJVTsNflHb4DKYqbsCfsNN+BiHTzJey1+XXfgUjRBFf/IptRHYy0ZCg20+VaNGRe8qUC3I2Wk2jIwdgT9QA2E4JInsQfNGuA7TzHIoUSXP2LbEZ1MNKSz2Dtnst8ByJSw2uxSYweQzXAvboTO5GP8B2ISA8+gL1Gf+Q7ECmS4OpfZDOqg5GmjQCWYOvdjfUci0hPrsXaPaf6DiSv3oA9QX/0HYhIL/oDs7EFLl/lORYpDNW/hE91MNK0C7B2zwW+AxHpxc7Y2kRPYdPnSzfxOLupnuMQqecM7LX6U9+BSBGo/qU8VAcjDRsNLAeWAqM8xyJSz9VYu+e9vgPJm9djT8ztvgMRaUB/4J/YFYlJfkOR/FP9S3moDkYaFq+z8VnfgYg0YEeSXhjVwlT5LXYiH+47EJEGnYm9ZjUsSOpQ/Ut5qA5GGjIYWASsQNPyS3Fcg7V7TvIdSF7sgs1w8KDvQE
SaMBCYhy3MN9pzLJJrqn8pD9XBSEPej2Yek+LZA9iELVAswBXYifxO34GINOkL2Gv3U74DkbxS/Uv5qA5GelXBpqTdgCaCkeL5A9bueZ3vQHwbA6wG5mJ1BSJFMgZ4GXgOzcwhNan+pXxUByO9ejPWALzWdyAiLTgae/1eV2/D0BtF7wYGAT/AioNEimQx9iH0PuxD6Sa/4UgOTQVmQWVxBx5rC2BA9PVwkkLLeIajPti6E2AXjIb2cJx6MyJVH6cn/YBh2PDgFXW2XY8NxezNWmBVjZ87YFn09QZsPY3u26+Kvif6/YY6j9Wu6cAlVgdTWVlvYymd/4ruv+M1CpHW3AY8Cfw7MB5Y0NOGoScwp2MfJv/jOxCRFl2BJTBnoARGzDAsQRgJi4+DFY8Bh2LJxcjod8OwQt5BWLLRH0sKBgJDsOSiP/WTj0HRcRpRnSiswXq/u6tu7PfkJepfcFqOjZWG5O/sTXXy1ZPqpKzaCOz5qdBcQXQcY70kKP57l1b9fjX2HK6Ifrc82X74Kli6Gu45Cbgj2r+R51XCNwF4E/AEcK/nWERa4YCfAJdgnRBf62nDSqci8uBQYAZwI6Dx4VJkfwd2x8Yzz/UcizQvTiKGYQ3g4VXfD4u+HtXt+2FYw3kEllAMZ7NkogK8AHwAuKG6ERwnEnGjdnn0sxUkicXK6Gdxw7qn5ONlbFFVov03Rl8vje4b6QEJUdwDBElSSHQ/MPp6GMlFwlpJUJwoxvvU+l9XJasMwBKxLWDaAHgUOK97XN3/1yujr5dF9yu73Vf/PP7ZcromiFIcn8MWrTwHzWApxTUeeBZr7+yEJTWbCTmB+RnwLmzozW89xyLSjo9gH0afBy70HEsZDQO27OE2giT5qE5KRmCNzrgBWm05XRuSK7GEoHtDMm5grqfmVfk7x8Ph9wHjOjSETHLDnQvr3w4D3kTvvW3xa3IUmyfNw0leo3HSVO1lek524tfni91uS6u+Vo9QZ/UBZmONv22AJX7DEWnLr4G3YMOkZ9TaINQEZgg2bm4FsB3JVUORIhqNTan8NDDFcyxFVaFr4jGK2glJrZ9XD7VdRtcGW7zSdfcr23GDr3uisozUuLOAM6GyV3rHlGJw+wB/BrZMsQ4mHno4EkuAuifl3ZOgEWx+rlQPN3yZzZOaerel1K9XktoOAe7Gip/f5jkWkXYdC9wMXE5S19VFqDUwb8a65q9AyYsU3xJsasFjsHnSZ/kNJxf6AmOj2zjsqmP8/VZVX4/FEsDqwvGN9Hzl+Okefh7f8jSs5jCsoFvKZybW0D8IuDWlY64neZ23ajD1Lw5MAvbp9vPhVcdYVxXHImA+sDD6eiF2cXJR1e+KPITxNVj8z6RwrFOi+/9N4Vgivt2GvQecCJxNjbrIUBOY+ESuOw2bSEH8Ektg3ka4CcxwuiYf47HkJE5S4t+Nie7jHuTlWENmETZz23ysgRd/vwR7I1yCJSTLO/LXZMpVsATmA74jER8qG8Hdg70G0kpg0rA6ur3Q5H79qJ30VF+QmEzXixXxkLe12Lm+ILotjr6fR5LoLCBJgvI0tO0qYE9stskvYcX3reiDDbdZiTX8RIpuPVbD/j7g9dR4XYc4hGwI9kb1IrA9PRT/iBTMcOxD+AXsg7wo+mCNjgnAxOh+66rbBJIGSVz8vIauV14XR/fdk5S4QRIXmZeIm4JVcav+pbTcucDJUDnQdySeDGPzCx7jSS5wTKDrBY8+0X4rSN4/XsASneer7udja291YorqF7EeqHXYEL6bgfOBvzV5nMOBO4FrgP9IMT4Rn47CEpefYLMKdxFiAnMS8CvgMuBcz7GIpOkm4Hjsil0eemHiq6Pb9HA/EWtQ9MOGXsUJWHybF93HV03jJEVj4OtS/YtkUgcTqj50TWYmYBdW4osoE6vu4+Fs8SLCC7AZkRZE38+vun+e2usHNWJQtG91O2xDFOuD2BRzdzR4rG8DHwJOAKa1GI9I3vTD2gkVrC2xsfsvQ3NMdK+TWELzGyyBOYZsE5gBwLbRbTtqf8hvRdJjEice8Yf6TJIP9/iq5kKyX+CvTFT/IlnUwYQqvoDS46J4VYbQ9WLMeOy9cCtg36qfx1NnLyd5r6u+QPM8yVSw82s8ziQ2v4gct8lei115ngl8EXvv7200yTFYz/UfGvj7RIpiA/be9i7g34D7qn8ZYg/MXGx2lDHUXwxNpEi2wV7fM7AhA60aS5KgbB/dqr/fCntveAkrLu1tmMV8SjmEyydXtQBM5Qbf0YhPbhrwKFQ2WxBGMjecrhd1ug+P3Sa674/V3cwlSWieifb/IJtPs15tU3T7J3Ax8HM2n5joVdjkI7eSXMAVCcU7sdf9BcAXqn8RWg/MHlgj7P9Q8iLheQ6rezgIS9JrFaMPwJL3CdgHW3yLe1F2IhkisRRbN2A2lpg82O37eaiGLI92xa4K3+M7EPFuBnCy7yBKKp4u/fE6241i8/fh3YD9qd8G6xPddsFmVf0ythbYT0naOG+O7tULJyH6PZbEH0O3BCY0/401uP7TdyAiGfkq9ho/Mfr+aOB6bCz8C9HvHPbB+ghwC/AD4NNYcechWC9Lb1f9JNfcWeBm+o5C8sDtA249uGG+I5GmXYz1zLgWbquAt0bHuSX62S4djF2kk/6M9TyOrf5haD0wh0X3Ggcqobod+AS2Ou0N2JSlT2P1EM9gwxPmkuqCiZIzqn+RmOpgimtHrMe8ERuwBtzA6OuXsItQfYGDsff8f2QQo0ge3I71WB4K/NpzLJmoYOs8PO87EJEMDcWGDvzVdyDig6uAmwfuxPrbSjm4aeAu9h2FNO3v1O5dWYfNgLYJu0D1EDbL2LuB3bGkJbZXtM/POxa1SOcdjb3OL/MdSFZejf2Bv/QdiEjG/oYlMUN9ByKd5qaA2wRujO9IJC/cueDuq7+d5MyLWG/Kaqztshq7MPVNbNalKSRr1/Tk7Gjfs7MLU8S74di58hffgWTlTOxE/ojvQEQy9l3stf4G34FIp6n+RbpTHUxB/QT4Bo0nK7X8Avss0HpQErpZWO/kkHobFtFPsRP533wHIpKxd2Cv9c/6DkQ6zV0L7pu+o5A8cX3BLQV3tO9IpONmAyvpOqxMJEQ/wNo9h8Y/aCXjz6u9sC6mv/sORCRjD0b3e3qNQjrMVbAC/hm+I5E8qWzEptQ+rN6WEpQR2GKYf2PztWFEQhO3e17pbQwlgemHdcH+A5uWUCRkT2PTaO7hOxDpKK3/Ij2Zgc1MKOWxBzZ50cO+AxHpgPh1/pr4B6EkMLti0wvO8h2ISAdsxBa03BHYwnMs0jlTgVlQWew7EMmd6cC+qoMplfgClhIYKYNHsJn5gktgyn4iT8Z6nl7GppJe1O22LPr9pd32G4PNfLIKmxGl+35LgTXA/2T+F0izHsbO3918ByIdo/VfpCfV68FIOcQNuTJeuN0W+ChwIzbN9GLs9f8ocDmwb419TsAWD83KucD/YT3kTwLLgQ+1ecxTgGuBu7C/bTFwTZvHLKqXsZqvVxPlLqEsZLlrdP+I1yj8eRrYChte8lqSO
eHXAv8F/AlbmX1Jt/0WAxOAcdgqvtOin28EPogtCLoi2k7yJX6tTwEe8BmIdMIr9S8f8B2J5FFlI7i4DkYLWpbDlOi+TO2eY7DJaw4E5mOzsP0c+BcwD7soewjwM+AW4FNYbfT+wP8C78swtlXYOoS7AjtFP2v3f7MBeBZrp8UXKx9t85hF9gg28mR77H8ehJ9hsxPoarRdgYsXxLq+if12rdrv9gziqqeCXb14Bhjl4fGL5gTsf/UF34FIJ2j9F6lH68GUzFxspEQZ7IldUHVYA/5Yep95bQBwFfBDYAdgATb8aFymUZoPYXGuwkob0vBakvbZfikds4guxZ6Dw30HkqY/Yn+U6gHg0yQv9NOb2O+sqv3+O4O46jm46vEnenj8otkDe65+6jsQ6QSt/yL1aD2YEumPXZ2/33cgHfCf2GiS9VjbpH+D+/UBHsOSPEfnZqi9Nnq836d4zDgpWkq5p8z+IPY8vBfCqYGZhGXYL3uOIw+qM9M/NLHfkVVf35VSLM2IF2WcjXXDSu/i7tNJPoOQjlH9i9SjOpjy2B5ryAYzjKaGfsCPgB9jCxgeC3wVS2QasQk4HxtWBs21h1pVAV6fwePF7aO7KPeU2XOi+0keY0jVQOwf+mffgeTAACyJc1gRWaP6Ypm9wwrPfGT406PH/7GHxy6qJSQntATLVcDNA3ei70gk79w0cFkWKks+HIF9Xn7FdyAZugr7G9fSelJewep4HVY/k7U9SUaSvDalY/bFJmJyqAZyd+x5uBrC6IGZgP0dz/oOJAf2B4ZEXzeT/e8LjIy+vofOZ/iDgAOir+/o8GMX2Vxga+xNWsKl9V+kUVoPphziYdahtnsuAk6Lvj4bm4ioFQ74J9Zr04n3z7in5EWsRzQN+2CLloLaR/HrfSKEkcDE3YNlKWbrTVGHj70O60lzwJ0eHr+oFmPjgYf7DkQyNRWt/yKNmY7WgymDuN0T4nvCW4Dzoq9vAa5o83j/wkbovNTmcRoRJzB3YkPY0jzmc9hi7WW2AuuRGwNhJTDdpwguoziB2URzicgRVV/7SGDiuB9GiWgz4te8ZqYKm+pfpFGqgymH0dF9aAnMUOBb0dfrgA+ncMy+dKb+pR9waPR1mj0lcQJT9t6X2ItEr/8Q1oGJT+SyJzCDsLnRAf6K1bQ0Youq/ZbRerfnGOBU4ERsSFMFy5TvwQrv1kTbjcKGPg2tcYx4Zq3YveiDuDfxa340thaQBEfrv0gztB5MSYR64fZ8YJvo6xuwSX3a9QS2REOjGm3LdLc/Sbum2YTpzcCZ2Hp+K7BG+n9ja9vEbaBOJGFFsIRknZ3C+zDW6P0P34F4djhJ8diXm9jvmKr9bmrhcftgU9stw6YPrH5hbY8lU49gC2XGRmCN7tHAZKzHyGFjXkdX3UJIsLP0RTpXnCheaP0XaZbWgymBX2Pv/dv5DiRFWwArSdojR/S+eepaactU+zwW9zNNPOZWwO+AhdjQudgeWN3PsSTPx9ZNHDdkdxHQsinxuif/7jsQzy4keaG3evtok485ELiO3teO2RkroLuf2kMWj4v230TzQ6FGYGNNy7qA6cex5+4U34FIVrT+izRL68GUwG3Ye39Iiz6/i6QtMpvOTk6TRltmRrT/lQ0+Zrya/BLgNTV+/26sJyZevLOW/sAZwI3Ar7Bk6G7gkwTSwK9hGvacbOU7kDTEV6Hf5DsQz+LFPNdgmfroBm8Pkrxp7NnE4/UBfhPt98M62/452u49NX53Gc0tMjUeOAq72vFCtO8+De4bmrj38V2+A5GsuGvBfdN3FFIkri+4peCO9h2JZGY6AV2FjvyOpC3Syemh02jLDMGGmTngHQ085nBsdjQHvLOXbeJjfqvG70diPXHH03Xpi0nYAp7PEdB6KVWuJ6Dex4uxP+YN9TYM2BCs4M1h2Xcr+y2huUkdvhTtNwe7etGbX9DzqvF/i373jQYe8zDs77sSmzntq5Q7gTkT+/tP9x2IZEHrv0irtB5M4O7F3vtDGmb9HEkCc1IHHzeNtswbSWIf38BjXhNt+8dethkBbIi2O77G738CbNvDvgdE+81qIJai+Tn2t+0Uwot/QHS/zmsUfh2MdSVCcwnMAVX7zaDxaf92wboowU6itXW2j8dudu8m3RIb6wmNzX42g2SWD0hWvC2r+DVf701Xiknrv0irZgAn+w5CMjMA+7ze4DuQlAzA1vSLPdChx02rLRNfQH8YWFDnGAdgkwSALdbZk6lYz8pG7HyuNgTr6dkzOt76br//KzYa5zVYLc9TdWIqkvh/NDCEaZTjxlv3f2CZVK//0kwCc1jV181Mn3wBduVnE72fgLG4RqX7mNapWK/PJpqLW8wrJ7LXKCQrU9H6L9Ka6Wg9mJANIKw2z9YkI0DWYDOVtms09YcZpdWWaWaq489E9+vpfXa0+JgPAMu7/W4U9rm/F7VrhzeQzETbSI9QkcTtnkEhJDCu/ibBixOYjUAzs89U92Y0msAMI+nOvIP6bzRjgLHR191n54jjfgib+UOaE7+J6hwIk9Z/kVZpPRgpkpVVX7+Y0jG/COzYy+/TasuMxhIJqD/V8XBsuFm8bW9/6xuqtuvueewC11RsquXu+mGJSzP1xUURt3s2hZDAxMNoBvS6VbiGkdSAPETXN4LeDMC6HsEWw+pplovuDsXWnIHG5iWvTpJu7/a7OIHxsXhmCOLXfL1ubymcV9Z/6T50QKQBlY3Y0MPD6m0phbSOsNo8L5Ksr5LGxATDgNfRe9sirbbM4Vjv0QbqjyQ5nGTY/p29bLcVSW9PT7HdTc/Diw+JYppG423CoohHnKwNIYGJG28hnczNOISkkK+ZYVj7kZy802n8Kv72VV83kvTExXhr6dpdOg7YPfpaCUxr4hO5zPVfoVL9i7RrBnaFVsKzFrsS3b/ehgXhSHqbR0S3dpwPfJve2zVptWXinpK/UD9ZmFT19cO9bBcfczXNjaqJfRobdvahFvbNu1faPSEkMGXvgWm1/qWV4WPd1TvpxwInRF9/m67FbVOj+w3UbqR9BtihxbjK4pUrEV6jkCxMRfUv0p7pqA4mVCG2e35Z9fUbe9yqvkOwoWNXNbFPO22ZWkO9tgA+UuM4I6u+fqSXx4uPeTddL1B+uk6cfbBZ1SYBBwLP1tm+iF4ZeRJCAlP2QuY4gak3JV93rRbwV590L9TZ9hPYiTwb+HK33+0b3T+AjdWuNh54M7bIk/RMQ8jCpfoXaZfqYMIVYrvnZ8Dj0dcfa/EYr8YK8xtZWiCNtswEbJYv6Dok7ARqLzL6dHS/HKtjqWUAEK/hVH3MHbDkrLtXAV/DFuJ8HHgZm33s8RrbhiComYfPpudFEkM3Eivcb7ZQqy+wItpvfpOPORw7+RzJiVvLgdib7EqSqZKrxSsJf73G735CY1Mkx2sAlXUdmHj++qN8ByJp0vovkhatBxOoeNX40EYp7Ic1wB3J9MaNehPWC7Jlg9un0ZY5jGT9l+qE5bfUXhh8r2jb3iYt+mjVMY+p+vmngff3sl8FG3p8DjaN8jG9bFtkd2PPzeAQemCW
RPejvUbhR1w8BvCnJvbbFytyg+anL14BfDb6+pQettkdGyO6NIqx1mJK8dCYRd1+Hq8q31uBm5h4+sQlvW4lRaP6F0mL6mDCFGq75wGs1mQFcAlwEfXrfPYFboz2O5HGZzFLoy0Tt19WkUxbPA77v9S6qDwTm/FsBNZz0t1pWMIUH+u5qt+dhCWuPXHAE9ii4J/Ekqiv9LJ9UY3Bnu/VvgNJw5HYP+4i34F0yCBshooDsCw7ztR/gWX840iSmmpDgcnYzBx3V+13C7A31mDq22AMfYAfY1ckjqv6+WCsR+wl7KTfppdjHIzNvX5F9P0A4OPATTTeLV72Hpjrsb9/kuc4JFXuLHAzfUchIXD7gFuvOpjgXIi997dTK5JnO2IzfTlsyuILsAb87tjq80dgNSa/Be6n9eeh3bZMhWQSpDHR9/8TxdeTKdjIl1+TTMA0HGvPfDOK6ZLomHFMHyZJthoVP38n1NuwYBaw+ZIchbU39k/6ke9AOuDVWNH7KqwLcmnVbRnW9boROK/bfttG+63uZb8NwFebjOdY7CS8E7uq8GfgByTTMzey//3YG8B9wAdpPIkCJTB3Yn//UN+BSJrcteC+6TsKCYHrC24puKPrbysFEg8zOrXehgW3H9YueQBruK7DGv8PYsPPa9WEtKKdtsxo4KfYMhb301j9zXhskoF7gd8DN2N1v7E+2Do2j0W//xKbL55ZT1xe0cosZnlVwV4DD/kOJC3bYf+k3lY0lTCVPYGZRTJ3vgRB9S+SNtXBBOhd2GdfrZmuJHyHYjO1btXLNm/GXiNrCGe2upHY33Q7JN1XRTYP63XY1ncgIh22HV3HyErxqf5F0jYDONl3EJKqeAYrtXvK6VfY1M7j6DmJ7Vt1H0K9OyRr9zwLYfxR67GTObTZOER6MworBJzjOQ5J11S0/oukazpaDyY08RIDaveU0wvYBAK39LLN5Oj+YcIZqTEpup8DYSQwYCfzaKwQSqQM4g8urZUTFq3/ImnTejDheQ4L7ncQAAAgAElEQVSrW1UCU04zsGmjf9/LNvtF9zdlH07HxK/3ORBOAjMnup/kMQbpvEq3+zLpciJLCFwFS2Bm+I5EQlLZiA1JPKzellIYG7AkZpLnOMSPi7BZyWotlgmwGzY19MNYrXAoJkX3/4JwEpjZ0f3kXreSkPTFplQEm52tbOI55NUDEw7Vv0hWtB5MeGZjDdjQ1oKR+hZgM5P9Cqtvqy7S3wNbF+ch4C0EsmJ9JG7jz/EZRNpOxmYm+ILvQCRTk7FpCmdiUymuwoZGrAaewqZWvN5bdJ31M+w1X2u1Xykkrf8iWdF6MAH6NvYZMNVzHOLPQGz9vF8B07Dhx7/BptkOYZKu7p7Blv4IatTNztiJ/CvfgYh0yExsAotGF/2U3NP6L5IVrQcToP/E2j0f9h2ISAeMwBY/vzv+QShDyP6JLca4h+9ARDqgHzbc6AlgredYJBWqf5EsqQ4mQLOi+9d4jUKkM/bAel4ejn8QSgKzCXgUG2K0hedYRLK2K9bzMqvehlIYqn+RrKkOJiyPYG0fJTBSBvHrPLgEBmxITR9gb9+BiGRsn+j+716jkDRNReu/SLamo/VgQvIyVvu5B+GstC7Sk83aPSElMPdF96/zGoVI9g6O7v/U5H4VYEfgbcBXo/3fn2Jc0jqt/yJZ03ow4bkPGIwu3Er4DsaGzD/kO5As7IQVtIW0aI9ILY9hK+sO6mWb7snKH7HGi8Nmb1uDrSVwYaaRSgNcBdw8cCf6jkRC56aBC2ldiLI7A3tP/5jvQEQyNIZuBfwhmgcsIrAp1kSqjAI2YglJta2B44DzgduB5dgH2xrsqoWrcXsZ+wAUr9wUcJvAjfEdiYTOnQvuvvrbSUFMwd7Lf+07EJEMnYC9zi+q/mFo80T/Gfh3YBdshiaR0ByMDf28N7q/H1sLph82rXKFrud1b9Ms9yewBaEKaiqqf5HOmA5cYnUwlZW+g5GGTAC2A7aNbttH378Da+cswYbOV7BGnkho4mGv91b/MLQE5i4sgTkKJTASpiOj+7uwLtW/YrNzbMISkmYogckH1b9Ip1TXwdzqORaBIVhCsi1JkjKJJFnZFrsItR54HngWW8zvcaxwfy02u9xbsGJ+TewiIToKG/LeZeRJaEOtdgb+AfwOeJPnWESy8BSwDTAaq2UBmzr8DOAzwHCaW9xyKTA7us0DXqj6fnb0e8mMq2DP+QegcoPvaKQM3DTgUaic5zuSEhgFvAob4jsh+rr6+x2wdtgaur73dn8vnos14Gr5T+DHwKeAr2T0d4j4MgFL3v8IHFr9i9B6YJ4EnsaGZAwGVnuNRiRdO2GF+beSJC9gtSzfAn4AvB24ANgKu0LX20WKZdiHX/WwhAOj77eKtlkBPEfygVp9m4e9scxDC2q2Suu/SKfNAE72HUTBjcKSkDgRmVh1vxV2kSl+D16HvYfOjW7/wOoUn42+fwZ7D2/V77ChY0ejBEbC8yasHfO77r8ILYEBa9ydjWVqt3mORSRNR0f3m53IkXXA1cA1wJuxD7OdsFqZWlOmz6bn4s9BWCKzTXSbgH1Yvwqrw4k/uOOZ0BYD87EP6vh+AfYhvaDq+/V1/8pymYrqX6SzpqM6mJ4MJUk+4vvq5CR+Hxwcbf8iXS/kPIE9v89jF3nmRj/PsjblOWwh74OAEdgELiKh6LHdE2IC81ssgTkBJTASluOj+3pj1zcBNwO/AY4FvoiNjwboW7Xdk70cYw02XO2pOo81mtpXHl8NvCH6fjx2JdJhScx8kg/8F7CZAxdEt8XR94vqPG4oVP8inVa2OpiB2DSs47D3prHR91tj703VCcvQaJ+VJBdjnscSkfvYvCd6Taf+iDp+i73nHg380nMsImkZhNW/zKNGfVdoNTBgDaX52HjRrel53KhIkYzFPjAfBfZqYf8jsCmWD8SmYQbroflcGsE1YDy1r2pujf1t46Jt4gbEBpJkZgGwkCSxmV/19cLo9y916O9IkepfxJdC18FUsPeMOBHZCnv/iH+2VbffjYj2cyTvG4uwRlFPvcXtDOny4d+AvwD/B5zkORaRtJyIvaa/A3y4+y9D7IFZB0wDTsOubt7hNxyRVLwVO1+va3H/P0S3A7FE5khsCFmnxD0s9WbJGUzXRkh1cjMWmyK9+ncDov1WUzu5WYgN86h18123o/oX8SVPdTDDgS2rbqOr7uPzfDxJkjKGpCf5Jeycr77A8Qi1L3YswnqnQ3Q/Vv/7Juz5XOE3HJFUvC26r9nuCbEHBuwkvgWbmeNMz7GIpGEGVte1M/WHdTViMnal0Xcjvl2jsMbNGKxxM4GkkRMnPdWNoyFV+76MzbJWK7lZ0sPPX6TrBAptcGcBZ0KllR41kTa4fbB107ZMsQ5mFEnyMYqu511vt+oLqdXn4xKSxKO6F3YhlpgsRhP1VLsE+CRwKvALz7GItGsIdt6vxEZubHbxIdQEpj/2BrcJG6qyzm84Im2ZiI3B/jvwWs+xFN0gGm9YVd+GVR1jDUkjayn2BrsSK55djl39XFl1q95mRbKtuxa
YD5WPZvbXitTk+mIJwDugcgf2+h6JXb0fFt2GR7eRVT+Lb6Oi31UnKnF7YhM9J/+1btVJS6g9JJ3wWuBBrP7x+DrbiuTdKVg917eAmp+RIQ4hA5vp6Drg/Vgx//V+wxFpy3uwWcSu8RxHCOL1Fl5ocr/+1E5sRpE06kZgU1HHjb/qxt4wurzfVoAFDj76IlakGCc21cnOSmyIzErsPW0Z1mO2Kvr5+mj79dH3qyh+j5r0bjj2WhyBFacPwerG+mOvs/7R90OwZD1OSrolJpWR8LvBMOtmNm8HrCJ5Pa7AXnfVr8mF0c9WkCQfS0l6LZdl8HdLfQ8BjwHHYLWFzb7HieTJ6dF9j+2eUHtgAPbBVin/PfBGz7GItKqCDRmLpzQuy+xcIRrMKw3Iz+4FF14HB5wGf3HJz1+52h03OIfScwO1pwVLl2NJzQosYVtN70nQOpKi5Xh7op/FvdcrSCZ/iBc33Ugy1r6nY4RiBHYRoYL9j8Aa/nHPXJxMgP2f4+nF4/9dT8fo7X87IvrZ8G7H7K76fxv/H17G/g9xr2B1AhIlHnccBa89EEa9M/p5nJRs3OwRpCg+BlwKfBb4sudYRFq1HVaj+yiwZ08bhZzAgHWn7o2thfG051hEWnEkloT/ElukUoKQWv3LSKyRO4ykkdvMVfqBWEH08Oh4A4Atoq8Hkax3sQXJhAnD6Toddz31GsXVyU+rx6jWW2MfuiYQrR6ju9UkU+rGiSFY8rAJmwEr7pnYgCUM0HPysY7avWu1ktMWZVIHI36NJlmDZkc0JE+K6QJshtQPAt/raaNQh5DFLsdWJ38f8BnPsYi0Iu5GvdJrFJK2tNZ/iRvFvnrmRkX31UlQnCDFqnsqetJtiN1mGjkGWIOt3kJ+1T1LjRyjp+Qj7skqqrKtB1MGS4AbsItdR6K18KR4+gLvxS7O/NxzLF7F0wkuJLmSKFIU22INrX9iQ08kCK4Cbh64E31HImXnpoG72HcUkqrDsaT7Ft+BiLTgFOz1e5XnOHLhG9iT8X7fgYg06evYa/ds34FImtwUcJvAjfEdiZSdOxfcfb6jkNQ9iPUk7u47EJEm/Rlr92h5AWB7bMzwk+gqthTHcGzYyhKSmgQJgjsL3EzfUYhYHYxbD66R4XlSHKdijcArfAci0oTDsNethrRWuRZ7Uk7wHYhIgz6BvWa/6DsQSZu7Ftw3fUchYuvBuKXgjvYdiaSqP/AMNtHDBM+xiDRqGtbuOcJ3IHmyH/ak/IXwZ16T4huMzSKzGltNXoKh+hfJG9XBBOrjWLvna74DEWnAnthMk3/zHUge/Q47mY/zHYhIHedir9Vv+A5E0qb6F8kb1cEEagtgAXYhbKLnWETquQlr9+jiXg37YEVts1AtjORX/KGzCltNWYKi+hfJG9XBBCy+GPYt34GI9CJunz+IRkn16EbsZD7JdyAiPTgPe41+xXcgkgXVv0jeqA4mYIOA57BamG09xyLSk3iE1LG+A8mzPbAxdo9jRW4ieTIaeBFbu0hDjIKj+hfJK9XBBOxsrHH4E9+BiNQwFXt9/tlzHIVwFfZkfdRzHCLdfR97bX7GdyCSBdW/SF6pDiZgA4AnsIu3+3mORaRaX2Am1u45zHMshTAeWA4sRVe5JT92w9YrmgsM8RyLZEL1L5JXqoMJ3LFYI/FeVGMg+XEm9rr8pe9AiuTT2JP2fd+BiERuw16TJ/sORLKi+hfJK9XBlEBcZ/A234GIYIt1z8MmLJrkN5RiGQTMBjYA+3qOReQU7INluuc4JDOqf5G8Ux1M4OJe/mexxqOIT9/F2j0X+g6kiI7Bnry/o4J+8WcE8DywFtjdcyySGdW/SN6pDqYEvoq1e77nOxAptf2xmqx/YUtHSAv+FzuZz/MdiJTWT7DX4Pme45BMqf6lA0YAd2JXmqVpqoMpgcHAU1jj8WDPsUg5DQAewdo9R3qOpdDGAYuxlWp39hyLlM/rscWbHgMGeo5FMqX6l4yMB44CPg+8gH0o7uM1osJSHUxJvAH73Hkcfe5I530Be5++ynMcQTiNZA7qfp5jkfIYCcxBV8JKQPUvGTkMuBu4EruSFw+PUQLTMtXBlMSV2Llyme9ApFT2BdYBC7B17yQF12En8wW+A5HS+AX2mvuq70Aka6p/6ZCLUQLTJtXBlMQIrP5gE/Amz7FIOWyBrUfkgOM9xxKUMdjwg43YqqAiWXo3dhI/jM2IJ0FT/UuHKIFpm+pgSuQgbCbW+dhQTJEsXYGWL8nMUdjViDnAKL+hSMAmYwupvgxM8RyLdITqXzpECUzbVAdTMhdi58xNaIFLyc7J2OvscbRQd2YuxZ7k3wB9PMci4RkCzMReY+/3HIt0hOpfOkgJTCpUB1Mi/YD7sPPmk55jkTDtgl20XQPs7TmWoPUHZmAn8xc8xyLhuQp7bf3ccxzSMap/6SAlMKlQHUzJbIsVVW8E3ug5FgnLUJIpk//TcyylMB54DjuZ3+w5FgnHh0gWTlUXammo/qWDlMCkQnUwJXQ4sB5YAuzgORYJQwX4JfaefLXnWErlIGyqtxdRnYK073CS19Nkz7FIR6n+pYOUwKRCdTAl9Qns/HkQrY4u7fs09np6AE1W1HHvx5782WiGDmndbsBS7OqWGgSlovqXDlMCkxrVwZRQhWR6/5uBvn7DkQJ7BzYp1jxgO8+xlNbXSDJIXZGQZo0FnsJeQ2d7jkU6TvUvHaYEJjWqgympAcCd2Hn0I8+xSDEdDKwGVgEHeI6l1PoAv8JO5v9DVySkcUOB+7HXzlc8xyJeqP6lw5TApEZ1MCU2GvgHdi6d6zkWKZZdgcXY+kJarDIHBgP3YifzT9Fc6VLfIOAO7DXzSzQld0mp/qXDlMCkRnUwJbcjsBAbBnS651ikGCYBz2LvwR/yG4pUG4kVtjnge55jkXzrj40fdsDtqHitpFT/4oESmFSpDqbk9sQmntkIvNNzLJJv44EnsPffL3qORWqo/gdpSJDU0pdk2sC70XTJJab6Fw+UwKRKdTDC64CXsFk0T/Aci+TTWOBR7L33Ms+xSC8mYrOSOeA7aDiZJPphc5074G/AKL/hiF+qf/FACUyqVAcjALweK8peC5zsORbJl/HY2nYqsSiIHUiSmO+j+gaxYWLxsLEHsCJIKTXVv3igBCZVqoORVxyNzSq1HniX51gkH7YHnsTec69EbeHC2Ap4GPvH/Ry7+i7ltAXwe5JhY8P9hiP+qf4lJf2AG4DHsak567kEOw/3zTKoclEdjLziEGA5Vtj/Yc+xiF/VF/K/h5KXwhkHzMT+gTehdWLKaDzJVMm3opoXAVT/kpqp2LnlgB/W2bYvSS/oezKNqlRUByNd7I8V9m8CPo+GDJXR/sAC7L32S55jkTaMAv5IMnRoK7/hSAftBvyLZKrkgbU3cweBO6VzYYl/qn9JyTDgNmzylP1q/H4ydgFhJjAfG+LyEjZe/yls5sjrOxJpsFQHUz6uP7jzwfV0UXZP4HmSuocBnYpMvDsReBlLYD/uORZJwUBsGJkDngP28huOdMBBwCLsf/4teu0+de8Ctx
bc2zoTmvin+hcJhepgysX1B3cDuKfBTexlw62Bh7DPwDuwpSYkbB/BptReA7zDcyySogrJ+OvlaLrBkH0Qm41lPfD+xnZxbwe3xpIZCZvqXyQ0qoMph1eSlzngJjWww3Csh9RhdWpTMgxO/BkEXIH9nxdhU2tLgM7AstNNwJexcdkShiEk0yS/iM3K0gQlMeWg+hcJjepgwtd08hLrD/wA+1xcgaZZDs32WHmEw9Z62dFvOJK1fwPmYv/w29CUuiGYTDJhw9+j71ugJCZ8qn+R0KgOJmwtJy/V3ovVnm0CvopmZg3BkSRD5a8DhvoNRzplDMnUuvOBY/yGI214K9bj4oBf0PZsc0piwqb6FwmN6mDClUryEtuLZGrdv6Cr9UXVDzgf2BDdPolmmyudftgUcxuwwqev0eNMVZJDI7CExWGzG52V3qGVxIRJ9S8SKtXBhCfV5CU2GltWIh5SdlpKx5XO2AX4K/b/mwsc5jcc8e1QYA72gvgbsIfXaKQRryf5n83EpkxOmZKY8Kj+RUKlOpiwZJK8VDsTm243XmZgbAaPIempAB/ApqGPh4yN8hqR5MZw4GfYC2M9NvWuFr7Mn5HAj7BxvJuw/1OGvWZKYsKi+hcJlepgwpF58hLbFVuLKZ745r/QUKQ82hGbCtthSedH/IYjeXUS8AL2QnkC652RfHgrMI/kf3NIZx5WSUw4VP8ioVIdTBg6lrzE+gOfxQr8HXAr0InHlfr6A5/DZs51wO/Q/0bqGIFd2d+IXeW/Dr1ofNoFuIWkd+wSbN7zDlISU3yqf5HQqQ6m2DqevFSbTHKVfxX2OavePH+OAB6ha++YSMOmkryAXgI+Dwz2GVDJjAa+iyUtDrgbeI2/cJTEFJvqXyR0qoMpLq/JS6yCNZTjqXnnAu9Ew8o6aQrW0+Kwi+g/wWbNFWlaP2xl9yXYC+oZ4HSsa0+ysQU2LWD8nM8BTiEXb6JKYopL9S8SOtXBFFMukpdqo4BvAuuwz+D7gDd4jSh822ALjsbP+Z+A/bxGJMHYEvg2yYvrn8C7gb4+gwrMYOAcYAFJr9fnyF2vl5KYYlL9i4ROdTDFk7vkpdquwG+xz2MHTKdjtaelsRVWshDXIKnXSzLzKuAqkmFNj2Mr3Gr9mNYNAz4GPE8y/vZSYJzPoHqnJKZYVP8iZaE6mOLIdfJS7RAseYkTmduxVeClddsD3yCZyvoF4EOoLSkdsDPwc2yMYvziOw/Ny92MicBXgGXYc7gG6+Wa4DOoximJKQ7Vv0hZqA6mGAqTvFR7Aza0KU5kZgL/gYbUN2NvrO0YXwRfAJxL7kaaSBnshI1bXIW9GFcCPwT29RlUjlWAw7ETeC32nC0Bvox1pRaMkphiUP2LlIXqYPKvkMlLtcOAm7FZWh3wHPAFYFufQeXYIOAdwJ0kyd9TwFnAEI9xiQA2S8TnSeo3HPAQtnLqSI9x5cUE4FPYSRs/P08DHwaGeowrBUpi8k/1L1IWqoPJt8InL9WmAJeT1G9sxGpmTkS9MmCzpn6TZEKiuDj/LUAfj3GJ1DQAOA5bO2YDydCom7Gi/+H+Quu4UdjffDPJ5Adro+/fSlATICiJyS/Vv0jZqA4mn4JKXqqNwKZf/htJQ/1F4GqsPVSmZGZ74CPAH0mei2XAj7DhYyKFsANwAdbTEL+QVwG/At5DIYdM1fUqrNfpVpIxnvFY2Y+T68L8dimJySfVv0jZqA4mf4JNXro7APgxXXsdFmJD64/HlkkISR9gH+AzwIMkf/MGbLKDd6H6Fim4fYGvYmuaxC/wTdgwsy9ji2YWcSzkCOBNWDfpEyR/m8MWAf08sIu36DpOSUz+qP5FykZ1MPlSmuSlWn/gGOCnwFKSdsEa4A/YBc29KeZIjInYVMf/A8wn+ds2AndhtS0BX6yVsqpg2fpnsbGQ8TAzhw2z+gs2td7JWG9GnuYC74vNC/8u4PvALJJZ2OI3pt9jUyLv6inGHFASky+qf5GyUR1MfpQyeeluIDbt8mXY0hPVFzpXYj0V5wNvJH8N/0FYm+1s4Bq6XoSOh8r9EhtVM95LhDmWpwaspG8UdmIfBhwMvJquxV0rsZ6MWdH9bOwE+hdWOJeFYdjwtx2AHYHdgT2i+0FV263HepDuxWbXuAub01xwb8fWCzoDKtd4DqbEXAWb4vwDULnBdzQineOmAY9C5TzfkZSX64/Vw+4NTIXKHK/h5MckLFk5BDgo+r7aAuBhrN3zGNbemQM8i7U7sjAuimMHbHbZPbAi/J3o2ku0EvgzdgH6duyi88aMYio8JTDlMhx4HXAgsCd2Au1A7dfBfGz6wkXYeNMXo/uVJInESyQn/ECSYWrDsdm/RlfdxgPbRF93txGbQexhrJ7lT8ADWE2P1KQkxj83BXgUGAeVxb6jEekcdy5wMlQO9B1JOSl5acJELJHZH0sc9qB2T8wGbMHr57G2TvXtJWySIEjWlgOruRmAtaFGYhdoRwNjq+4nUbs2Zw32+TELm6DgjyQjT6QBSmDC9wVsLOWcHn4/DOuZ2Y3kCkF8P4H0XiMbsYRoDkkvz2zsCsij2Mlcyx+iOA7HivVS5t4OPAyVR9M/dtaUxPjlzgLOhMpeviMR6Sy3D3aleEuorPQdTbmEkLy4k4EnoTIrg4MPBT4HfJGeL4KOxy7g7oq1d+LbDsCWKcaylqTNM4ek3fMwdtG2p2TlRazt8y5s+L/UoAQmbO8BvouNsfxHC/tXsKsIY0h6UgaRrD0zmGTY1yrsZN0ELI++j69eLMYK7VrxI2wKxZeBU4GbWjxOD9y9wM+h8r10j9spSmL8cdcC86HyUd+RiHSW64u9r78DKrf6jqY8QkheANyDwA+gckUGB/8psF90a2Uo/AC6jh6J2z3xmnLDgH7R1yuwJGQd1kZZgZ0Xi0l6blrxNFan/BS2rssjLR5HpJAmYyfTab4DadPOdJ068GpSXe/G3Qju/PSO54MK+ztP679I2Wk9mM4KqWDfzQV3QgYHPhlLWvbI4NiddAZ2MXgTNkz/42hxSimJ/lj3/vW+A0lJ9RTKa4B52NTQKXCXg/t+OsfySUlMZ2n9Fyk7rQfTOSElLwBuNbi066e2xYZevT/l4/owiq6zyK7B6oIn+wxKpBMuwcZajvAdSEo+jV1VqZ4PfSM2PG5ge4d2F4ELJNFTEtM5Wv9Fyk7rwXRGcMnLMHAO3I4pHrQvVgQf0myQd2E9MNVLYazChtSLBGl/7IV+sO9AUjSZrnOjx7fV2BjRvVs/tDsH3PS2I8wNJTGdofVfpOy0Hkz2QkteANzkKIFJ8wLrudjMqbVmOS2q0+l64Ta+rQduxmqTRYIxECv2CrFhVT2MrPq2Ibp9kZZW3XWngnsstShzQUlMtlT/ImJUB5OdEJMXAHcAuLXROlpp2AErlj8ppePlRfdhZNW3tdjESMd7i04kZV/Cho4NrbdhAX0a6z6tdTLHY0T/ii2O2QR3FLhFaQaaD0pisqP6FxGjOphshJq8ALjjwD2X0sH6ADOwmdlC1H0YWfVtE5bg/IIw23xSInthWfkRvgPJS
E/DyKpv67Au1ybGiLq9wW0E16/+tkWjJCYbqn8RMaqDSV/IyQuAOx3cQykd7APYlMXjUzpe3vQ0jKz7UPoXgEM9xSjSlgpwH5DFnOp50tMwslq3Yxo7pJsYjcettTJvAJTEpE/1LyJGdTDpCj15AXCfAndbCgcaDywD3p3CsfJqS3oeRta9N2Y9NuxMpFDei00fONZ3IBmrN4ws7lJtoh7GDYiGA+2eQbw5oSQmPap/EelKdTDpKEPyAuAuBZfGostXYTOPhb4g+530PIwsHnnyErbgpUihDMe6D8/2HUgHvIqeT+K12NWYNzZ/WLcM3OGpRZlLSmLSofoXka5UB9O+siQvAO5qcJe1eZB9sYb7XikElHfvo+dhZGuwiZu0PowU0jeAh4EAazhqqjWMbD1wL7B1a4d0T4E7JaX4ckxJTPtU/yLSlepg2lOm5AXA3QruvDYO0Af4C7YOXBn0NBvZBqxsoM218ET82BG7ClGm4q3qYWSbsAUtV9HW2E93L7gy9GChJKZdqn8R6Up1MK0rW/IC4B4Ed0YbB3g3Vri/ZUoBFUH1MLJ1WPIyy2tEIm26BlvQqEx2IjmJXwSOBB4DvtD6Id2N4M5PIbaCUBLTGtW/iNSmOpjmlTF5AXBzwZ3Q4s79gX8Cn0gxoCJ4HzZcbA2WuBwSff16n0GJtGp3bOhUG6vQF9bfgXuAraLv3wqspOVJDNzl4L6fSmSFoSSmeap/EalNdTDNKWvyAuBWgzuwxZ3Pwmp+h6QYUBFsiSUs3wUGRD/7NjaULvRJDCRANwD/6zsITwZj42BjFexE/kprh3MXgbu+/bCKRklMc1T/IlKb6mAaV+rkZRi2bEGTC04DMAh4FktiymiLbt+PBVYAx3qIRaRle2JjIHfxHUiOHAO8DLRwddydA256yvEUhJKYxqn+RaQ21cE0pszJC4CbHCUwI1rY+UPAv7BhZGIuBh7wHYRIM64BrvMdRA49BHy++d3cqeAeSz2awlASU5/qX0R6pzqY3pU9eQFwB4Bba++nTemL1b6UZLKdho3DJjGa6jkOkYZsgxWwH+A7kBz6D2ABNsSsCe4ocIuyCKg4lMT0TvUvIr1THUzPlLwYdxy451rY8e3AEjYfRiXwY8o3mZMU1KXADN9B5FR/4Bngv5rbze0NbiO4sqyl0wMlMT1T/YtI71QHU5uSl4Q7HdxDLez4F+CCtKMJxC5YScFuvgMR6c1gYCnw774DybFPYrOUNcFNjMbljsskokJRElOb6l9Eeqc6mM0peenKfQrcbU3utOR6/t0AACAASURBVD+wFhifQUChuAX4ju8gRHpzKjAfFbH1Zjw2xG7fxndxA6LhQbtnFVSxKInpSvUvIo1RHUxCycvm3KXgrmlypx8Dv8oimoC8Bbu43eTweZHOuYOWpwoulZuAJtd1ccvAHZ5JNIWkJCah+heRxqgOxih5qc1dDe6yJnaIR528OaOAQtEPmIfVConkzg7ARmCK70AK4HiavhrhngJ3SlYBFZOSGKP6F5HGqA5GyUtv3K3gzmtih3cDz2MNdOnd14Hf+w5CpJaPA3/1HURB9AdexBKZBrl7wWmKxs0oiVH9i0ijyl4Ho+Sld+5BcGc0scM0oJkemzLbGyvmH+07EJHu/gh81ncQBXI1cGXjm7sbwZ2fVTDFVuYkRvUvIs0pax2Mkpf63FxwJzS48VBgNXBIhgGFZjbwHt9BiFQbh2XWr/YdSIGcBCzGFsBqgLscXJN1M2VS1iRG9S8izSljHYySl8a41eAObHDjJj/DBfgWcIPvIESqnYZl1tK4ocAaoME3S3cRuOuzDKj4ypjEqP5FpDllq4NR8tIYNyxarmDHBnf4CfDTLCMK0OuBl4ABvgPphD6+A5CGHALc5TuIgnkJeIDGu58XAWOzCycElWux7ukrSpTEHAZM9x2ESIHMxN5/D/IdSPZcf+A6rP5gKlTmeA0n3+J11hY1uL3aPc37EzbhwWt9ByISewJ4r+8gCuhi4ObGNnWngnss02iCUZaeGNW/iLSmDHUw6nlpjjsA3Fp7X61rPOCAV2UcVIj+BHzCdxAiYL0Cm4CdfAdSQMdgs5E10NPojgLX6JUhKUUSo/oXkdaEXgej5KV57jhwzzW48UnY9MnSvIux2dtEvDsCWOY7iIIah13FmVR/U7c3uI3gNN98w0JPYlT/ItKakOtglLy0xp0O7qEGN74QNcJbdRLwjO8gOkE1MPm3G/C47yAKaiGwBNilwW37AFtmGlFQgq+JUf2LSGsCrYNRzUsbxtJ4/csUQEO6W/MPYFtsIqOgKYHJvykogWnHP7DnsJ5FWG+NCvmbEmoS4ypYAjPDdyQixVPZCNyDnUOBUPLSpmYSGF24bd2TwEZgZ9+BZE0JTP5NAp72HUSBPQ3sUH+zyjpgBclMKdKwIJOYXbFC0nt8ByJSUDOAqb6DSIeSlxSMxUY6NGISave0ah3wHA21e4pNCUz+NXPSy+YWAo0WYWsq5ZYFl8RMBWZBZbHvQEQKajqwb/HrYJS8pGQcjfXADAUGo3ZPO0rRllHBcv6Nweo4pDVLgNc0uG0pTvrsVK61UXhcZfeVa/zG0xbVv4i0p7oO5lbPsbRIyUuKGh1CFl9wVLundYtp/MJtYakHJv+GA8t9B1Fgy4ARDW67ECUwbQqhJ0b1LyLtK3odjJKXlDWawAyP7tXuaV0z7Z7CUg9M/lWILmtLS5p5/hahGpgUFL4nRvUvIumYAZzsO4jmKXnJQKPD4eOFLtXuaV0FWz8waOqByT9HckJL8/rQ+ImsIWSpKXRPzFRU/yKShukUrg5GyUv63DBgEI31wMSf12r3tK6Zdk9hKYHJv3XAQN9BFFh/YEOD2yqBSVVhkxjVv4iko2DrwSh5yUg8sqGRBGZ9dK92T+sG0Hi7p7CUwOTfQjSsqR3jgQUNbqvnOnVFS2JU/yKSniLVwSh5ydBY7GLsiga2jYeZ6bO4deMpwSxuqoHJv/nABN9BFNhW2HPYCPXAZKJQNTGqfxFJVwHqYJS8ZCwq4K80UteyFFiLtXv+lWlU4Wqm3VNY6oHJv2eB7X0HUWDbA883uO0iYEtwSuxTV5iemKmo/kUkTdPJdR2MkpcOGEfjPQIOW4hxUmbRhK0/sDX2HAZNCUz+zQL29B1EQVWw525mg9svxM6JLTOLqNQKkcSo/kUkXTmug1Hy0iGNTqEc+zuwR0axhG4KlsQ87DuQrCmByb+Z2EKMfX0HUkDbAaOwN8NGLMKu/mgYWWbynMSo/kUkfXmtg1Hy0kGtJDB7ZRRL6PYE/oldNBDxaig2HnR/34EU0GnA7OZ2ccvAHZ5JNFLFvR3cmnwlMW4KuE3ggl/BWKSz3Lng7vMdRcL1B3cDuDngJvmOJnzuanCXNbHDkVjB/4CMAgrZlcBVvoMQid0BnO87iAL6X+B7ze3ingJ3SibRSDd5S2LcWeAaHW4oIg1z+4Bbn486GCUvneduBXdeEzsMBFYCupjYnApW+/I234GIxM4FHvIdRMEMBJYAxzW3m7sX3NlZBCS15CmJcdeC+6bv
KETC4/qCWwruaM9xKHnxwj0I7owmd5oG6P24Ofti6+iojldyYwL2otzbdyAF8nZgMU0vhuVuBHd+BvFIj/KQxLgKuHngTvQXg0jI3DRwF3t8fCUv3ri54E5ocqe30dJneKl9H7jJdxAi3U3DXpzSmD8Alza/m7scnJ7njvOdxKj+RSRbPutglLz45VaDO7DJnQZgM4NqOFRjtgCWAcf7DkSkuzcCL2MLFEnv4m7UnZrf1V0E7vq0A5JG+ExiVP8iki1fdTBKXvxyw8A5cDu2sPNFwINYbYf07mPYpEVax05y6R5a6lUonZuBq1vb1Z0DbnqawUgzfCUxqn8RyZaPOhglL/65yVECM6KFnUdjs5E1O/ysbAZhC3Y3W2ck0jFHAKtpqWehNA4H1gE7t7a7OxXcY2kGJM3qdBKj+heRzuhkHYySl3xwB4Bba++zLbkQeAxNqdybL2Brv/T3HYhIb64H7kJdqrUMxN7o2viAdEeBa2bBLclEJ5MY1b+IdEan6mCUvOSHOw7cc20cYAjwNPC5lAIKzc7Yhe1jfQciUs/WWKHWmb4DyaGvY290Q1o/hNsb3EZwGkfqXaeSGNW/iHRGJ+pglLzkizsdXLvLQLwZa6TvlUJAIekP3Atc6zsQkUadgp3Mr/UdSI4cC6wFXtfeYdzEaLzuuDSCknZ1IolR/YtIZ2RdB6PkJX/cp8DdlsKBvgc8BbRSSxOqrwNzsVohkcL4LjbjhGYlg1cDS4GPtH8oNyAaTrR7+8eSdGSZxKj+RaSzsqqDUfKST+5ScNekcKCBwF+xSXo0QgJOBdYA+/sORKRZ/YHfArOAkZ5j8Wkb4BngR+kd0i0Dd3h6x5P2ZZXEqP5FpLOyqINR8pJf7mpwl6V0sK2BfwE/o9x1wG/AkpfTfAci0qqhwP3YGMhRnmPxYRI288avgb7pHdY9Be6U9I4n6cgiiVH9i0hnpV0Ho+Ql39yt4M5L8YBTgMXYwt59UjxuURyFrQl4ju9ARNo1EvgTMJNyDSebAjwLXEfq0yu6e8Gdne4xJR1pJzGqfxHprDTrYJS85J97EFza65O8BpiH9cSUaXrlk7Cel4/5DkQkLVsAt2IN+n/zHEsnHA8sBy4n1Z6XmLsR3PnpH1fSkVYSo/oXET/SqINR8lIMbi64LBai3AmbdfQeYHwGx8+TCrbWy1q0WKUEqC82I8Vq4GzCHB86ALgIO4k/nN3DuMvBfT+740v70khiVP8i4ke7dTBKXorDrQZ3YEYHHw3cjl28DbVudRw2ccFCYKrfUESy9VZgCdYjs43nWNL0GuAhrIDvkGwfyl0E7vpsH0Pa124So/oXET/aqYNR8lIcbli0LMGOGT5IX+ACYB1wGW2tA5c7JwILgOnAtn5DEemMrbEZyl4CzsOmHyyqkcA3sTenK4Hh2T+kOwfc9OwfR9rXThKj+hcRP1qtg1HyUixucpTAdGLtlgOAJ7BZSU/uwONlaSfgFmAVcC7lnKxASu4tWI/FbOB92NTLRbEF8N/AIuBvwMGde2h3KrjHOvd40p5WkhjVv4j41WwdjJKX4nEHgFtr77cdMQD4JLASq415Q4ceNy3bYAt2rsVmV53kNRoRzwZjM1bMxwrePkK+V7KdgBWrLcCmSD6NTAr1e+OOAreos48p7Wk2iVH9i4hfzdTBKHkpJnccuOc8PPAE4FtYTfA9WI9Mnhe/3AP4MTbD2AzgUL/hiOTLECx5eRK7OvFj7CTJQ9dkf+A4bErktdhqu6fh7Q3H7Q1uI7g8v+HJZppJYlT/IuJXo3UwSl6Ky50O7iGPAUwEvobVBT8LXAjs6jGeaiOA04G7gY3ANMKdiEAkFX2AY4BfYlcn5mKr2J8CdPJq9DZYkvIzbJjYUuAKIKvZSprgJkbjdsf5jkSa1WgSo/oXEb8aqYNR8lJs7lPgbvMdBXYB931YMfxGbFj6V4AjsVEqnVABXg18FPgN1v6aA3wJ2KFDMYgEYwTwH8DVwAvYif0Q8FUsodmTdE7uYcC+wKnAd4HHAQc8BfwQm2ljUAqPkxI3IBpetLvvSKQV9ZIY1b+I5ENvdTBKXorPXQruGt9RdLMdNhrlZmAFlkjcAXwGOBbYkXRGf4zDRrmcCfwcW3hzA/AAcDFW1xvichcdoydPqu0GHAG8HhuTuX3087nAP7AampXAMmyGs5eje7DZwYZixffDgS2BycAuWDfuBmwygYeAP2BvGHMy/nva4JYBJ0LlLt+RSCvc24GrgDOg0u0D1E0BHgXGQWVxx0MTkYg7FzgZKt163l1/bEjx3sBUqMzpdGSSBnc1sBgqeV05vh+wP1bsfxjWQzIOm/X0n1i7Zy7WzlmJLaD9MlarUsFmSR2GtXuGYgtq7gLsDIzCZhF7ErgPa/PcBbzYkb+sBDTGX6o9Ft2+HX0/CDsR4xNyEvAqNk9WwJKaOKF5CVgM3AZ8B+txmQ2s78DfkJZFwFjfQUirKtdaJx9X2X2XJGYqMEvJi4h304FLrA6mstJ+pOQlIOOwz/+82gD8KbrFRmJtnl2j+62xIe8jsGRlKDYyZSPWgxMnNXG75xos8XkSS35cB/4OEZGYuxfc2b6jkHbVGk6m+heRfOheB6NhY2FxD4I7w3cUEqY8zEAlkkcLUQ9MACrXAu8BrrAkxlWwoQIzvIYlIkBlIzbN7WHqeQnSWGw0g0jqNIRMpLZFWPe3FF71cDK2xsYp3+MzIhF5xQzgrSh5CdFY7GKgiIh0hrsI3PW+o5A0ubdH60484zsSEYm5/aNZH+dq2FhI3LBoOYIdfUciIlIi7hxw031HIWlz94Lb0NhilyKSLdcf3I3RwsGn+Y5G0uQmRwnMCN+RSJhUAyNS20I0hCwwroItGPYtXqmJERE/Xql52QubXjYvK6RLOsZi0xGv8B2IiEiJuKPAqfgwKG5KNFRlTP3FLkUkO91nG3PngrvPd1SSJnccuOd8RyEiUjJu72hYgya6CIY7C9zMqu+VxIh0XK2pkt0+UX3aMJ+RSZrc6eAe8h2FiEjJuInR+F0NIwtGrfVflMSIdE5P67x0Xw9Gis99CtxtvqMQESkZNyAabrS770gkDa4Cbh64E2v8TkmMSObqLVLppoG7uNNRSVbcpeCu8R2FiEgJuWXgDvcdhaShuv6l5u+VxIhkpl7yAqqDCY27GtxlvqMQESkh9xS4U3xHIWnoXv9ScxslMSKpayR5AdXBhMbdCu4831FIuFSgLGnaEvgQcCywElgLzAUuBIo4G8kibCpIKb7DgOm9b1K5FhzAVXZf0fAHkba8MlXy3sBUqMzpZeOZwEvAQcCtmYcmWRuLfYaGLrR2j0jpHAo8Dbwf6F/18+2wD7DBPoJqj7sR3Pm+o5B29Vb/UnN79cSItK3Rnpcu+6gOJhhuLrgTfEeRsQDbPSLlcgC2WNVRNX73A+yKRAFrSdzl4L7vOwppV736l5r7KIkRaVkryQuoDiYkbjW4A31HkaFA2z0i5TEEmA18r4ffX4eNxzmoYxGlxl0E7nrfUUi7Gql/qbmfkhiRprWavIDqYELhhkXLEOzoO5K
MBNzuESmPzwGbgMk9/L4/sE3nwkmTOwfcdN9RSLtqrf/S8L5KYkQa1k7yAloPJhRucpTAjPAdSUYCbveIlEM/rEjtft+BZMOdCu4x31FIO5qtf6l5DCUxInW1m7y8chzVwRSeOwDcWnv/DU7g7R6RcpiKdZN+x3McGXFHgSvDLCoBa6X+peZxlMSI9Cit5AVUBxMCdxy4UGfgmkrQ7Z7i0DTK0o7XRfePRvfDgWOAidjMHL8HVnuIKy2LgC3B9YPKBt/BSEumArOgsri9w2iKZZHampoquRHTgUusjqKyss1jiR/jgIW+g8hI6O2ewujjOwAptL2j+6XAIcBZwH3AlcBG4A7gYD+hpWIhdo5s6TsQaVkD6780qnIt8B7gCvXEiEAGyQt0XQ9GiinkNWBCb/eIlMKd2OXo92JFbd3thi3s1Eb9gU9uQDT8aHffkUgr0qh/qXlcDScTSXXY2GbHVh1MoblLwYXaSx14u0ekHB7CTuQ76XnBpiuBZ7FpBwvILQOnudwLKa36l5rHVhIjJZZl8gKqgyk6dzW4y3xHkZEStHuKQUPIpB1xXcg/6XnM5x+w6QQ/0JGI0rcI6w6X4plKKvUvtWg4mZRVJsPGupsO7Kv1YAprHOEOIStDu6cQVMQv7YgLLB/pZZt/RPf7ZxxLVpTAFFeK9S+1qLBfyqYjyQt0rYO5NaPHkOyEXANThnZPIagHRtqxLLrvbbaR+ApFUetIFqIEpoBcBUtgZmT7OOqJkbLoWPICVDYC92DnsBRPyAlMGdo9haAERtoRL/I4oJdtNkX3RU0CFmHd4VIsuwLjsUZQxpTESOg6mby8YoY9lhTQWMKdRrkM7Z5CUAIj7ZgZ3U/sZZu4yG1+xrFkRUPIimkqmdW/1KIkRkLlJXkB1cEUlBsGDCLcHpgytHsKQQmM/D97dx4mR1Xucfxb2dgMQSABDQgaFGVRIqgBRIJghKvIFRERRFBUxD1yReKCuIF6BdSriAIXBJcIioiC4QKSgAgimyABjSiyKCQsAQIhCZP3/nFOz/R0qqqrl+pTy+/zPHl6pvpU1Tsznap6q857Ti+uAFYAL01ps4l/vSb/cHKhBKaccq5/iaMkRqomWPICmg+mrBo9FqqawNThukekFn4K3ENyMnwKsArYbmAR9ZUdArawfTspjrzmf8m8fw2xLBWQ91DJmWLQfDClYzPAVvg6xKqq+HWPSDWMA34B3EH87LLTcAVrB8a8NxG4H/h6btHlzmaBVfVOUkXlOf9L5hiUxEiJFSF5Ac0HU0a2L9h9oaPoQbtrHqj8dY9INczEjRFrwGkJbQ7A9fV8TdOyjXHDX/6A5MmeSsCmgw2Bacjx0rCjwG5p3y73OJTESAkVJXkBsB3BVqkOpkzsCLCbQkfRg5m0v+aBSl/3iFTDROBS4E7gFSntdgR+5NtehHvEelDu0eXOpoIZmEYiKw2bC/aN0FE4SmKkTIqUvADYWLBHwfYOHYlkZceCXRo6ih5kveaByl73iEgF2ATfHUnjuZdC6PqXOEpipAyKlrw0qA6mXOwkME3qKyISni0F2yN0FJJFEepf4iiJkSIravICqoMpGzsH7OTQUYiICLYILK5YTwqnKPUvcZTESBEVOXkB1cGUjc0DmxM6Cqk+FSaLtKe5YMojwPwvWUVzXV0oZ7vXSN0sJLCg87xk1TwfzLzAsUh7k6nuHDBSIJrIUqS9xSiBKQGLcAnMgtCRJNNkl1IUpUhegGgIuBr3f1uKTwmMiEgx2Olgp4aOQtopav1LHHUnk5CK3m2slepgysOWg+0cOgoREcFOADs/dBTSTpHrX+IoiZEQypa8gOpgysIm+mkHtgodiYiIYLPB5oeOQtop0vwvWSmJkUEqY/ICmg+mLGyaT2AmhY5ERESwQ8AWho5C0hRx/peslMTIIJQ1eWnQfDDFZzPAVvh6RBERCctmgakosdDKVP8SR0mM5KnsyQuoDqYMbF+w+0JHISIiANh0sCEwDTteWGWrf4mjJEbyUIXkBVQHUwZ2BNhNoaMQEREAbKrv1zsldCSSpIz1L3GUxEg/VSV5AdXBlIEdC3Zp6ChERAQAm+C7J20bOhKJU+b6lzhKYqQfqpS8NKgOptjsJDBN0CsiUhy2FGyP0FFInLLXv8RREiO9qGLyAqqDKTo7B+zk0FGIiMgwWwR2YOgoJE4V6l/iKImRblQ1eQHVwRSdzQObEzoKqQcVJYtkswSYHDoIibU7MD90EP0XzQUDONu9RuqaIW3YeOA8YDowE6K7g4bTf7cAy4BdgXmBY5E1TcadK0VyNyZ0ACIlsRglMAVkES6BWRA6knxEc4HDgTP0JEbSVT55AaIh4Grc/3kpHiUwMjB6AiOSzRJAo5BlYhHwYeAgYANgPHAb8AOIftnnnb0Y2AR3UVNRehIj7dQheRm2ADggzK5tb+DdwFrAekAE/Aw4E6KVYWIqlMm4m30iIlIMdgLY+aGjKD5bF+y7YG8H8094bQrY1/xQ1L8Ce3Yf91fR+pc4qomROFWueYkTqg7Gvu+HCV63adnaYJ8Duxlsw8HGUzQ20R/jtwodiYiIDLPZYPNDR1F8dgrYLgnvHe1PcJf3cX8Vmf8lKyUx0qxuyQuEmQ/GZoO9P+X9ozX/iU3zx/dJoSMREZFhdgjYwtBRFJttlZ5M2Dg/mpuBvakP+6vY/C9ZKYkRqGfy0jDo+WBsEdhaKe+P8UlVhYZy75TNAFvhuxCLiEgx2CwwFSemsk+C7dqmzZd8AnNuH/ZXwflfslISU291Tl6Agc4HY88Fe6z9hbldS63nCrN9we4LHYWIiIxi08GG3FMEiWfn4ib8/EhKm3f4BOaPfdjfB+pT/xJHSUw91T15gcHWwdgUf8x6a0qbMWD/Ats0/3iKyo4Auyl0FFIfGkZZJJvFuP8vNS/UTLUOMAl4bUqbp/yr9WF/FZ3/JSsNsVw/tRptLE3zfDA5ixYDd+FGAdw3odFBwA0QPZB/PIWlIZRFRIrHJvjuStuGjqS47Hm+2HWLlDaf8Hcz/7fHfUVgD9Sv/iWOnsTUg568jDbIOhjb3x+3DOx7YM9qem9bsBvq/fQFwE4C0xDvIiLFY0vr3ce5H+xifxFwSI/b2aa+9S9xlMRUm5KXNQ2yDgbADgd7yh+//g42E+xtYGfR16Hhy8rOATs5dBQiIrIGWwR2YOgoyss293VEC8HG9ritmte/xFESU01KXuKFmA/Gtge7s+lpzAKwdQa3/yKzeWBzQkchIiJrsN+DfTB0FOVlPwBbCbZbH7b1U2o1/0tWSmKqRclLsiDzwRwMdibYt/wTYAO7Heylg4uhqOxGsPeEjkJERNZgF4IdHzqKcrK9/Mk+ZTK4zNtS/UsqJTHVoOSlvUHVwdg6vqvYUU3Ldgf7hz+uLacvc1uVmd0Dtl/oKEREZA12OtipoaMoH5vqE45j+rQ91b+0pSSm3JS8ZDOoOhj7UfzTd5vozwvmny7X+EmMLQfbOXQUIiKyBjsB7PzQUZSLrQP2B7BP9XGbqn/JRElMOSl5yW4QdTB2qD+GpUxkae/39X
2/zi+OIrOJPonbKnQkIiKyBpsNNj90FOVhY/yF2HF93q7qXzJTElMuSl46M4g6GLsO7L0Z2p0E9lh+cRSZTfMJzKTQkYiIyBrsELCFoaMoD/sm2Jf7vE3Vv3RMSUw5KHnpTt51MPYY2KsytNuuvhfxNgNsRfpTKhERCcRmgWmm4UxsNth/p7zf5XCbqn/pjpKYYlPy0r2862DsgWwF+raZ+/vVke0Ldl/oKEREJJZN9/2cx4WOpNhsf7BT2rQ5rcttq/6la0piiknJS2/yroOxn4B9O0O7g8HOyieGorMjwG4KHYWIiMSyqb6LwJTQkYRj4/zF1h1gr455f4brC566jR3oejhq1b/0RklMsSh56V2vdTBtj2lbgv0bbGbKNjYFu76+5wY7FuzS0FGIiEgsm+C7L20bOpJwbCYjs1C3PEWxaWAP4kbsuTbm3/Vgf/F3S9/exb5V/9IXSmKKQclL//RSB5N2TBtuszPY3/yF+jpNyyOwA8Eudzdv6spOAvth6ChERCSRLQXbI3QU4dhEd6fN7gR7Rct7lzRdCLT79/Iu9q36l75REhOWkpf+6qUOJu2YNqrdumCfBLsCbAHYPJ84zQFbu7t9V4WdA3Zy6ChERCSRLXJ33GTwVP/SX0piwlDy0n+DmA9Gktk8uh6YRaQ7KkYW6cwSYHLoIGpqd2B+6CCqI5oLBnC2e43UBSR3Nh44D5gOzITo7qDhVMctwDJgV2Be4FjqaDLu3CgyMGNCByBSMotRAhOARbgEZkHoSKolmgscDpyhJzF5U/KSn2gIuBp3jJDBUwIjItI7mwy2EOwLYDuBbej6KNtWYG8C+wHYUV1u+3SwU/sbr7RXlfqXPD+bPcWl7mS56ne3MZsE9lv3/0KcvOeDkWS23A10kOs+ZoCdBfYzX295Adh++e5TRGSgbFpK8fiq3i4Q7QSw8/sXq2RTlfqXPD+bPcemJCYX/UpebBPcZLrHgf3Lf2Z27FeU5ac6mDBsov8sbpXjPj4D9lewrZuWbQl2G9jn8tuviMhA2TQ/qsw5uCF9F/mRY74K9vwetz0bbH5fwpQOVGX+lzw/m32JT0lMX/Utedkd7CqwM8FeB/Y1JTCtep0PRrozfFNmUk7bfxduAumtY97bzr+3Vz77FhEZKJsG9sWctn2I6wIkg1Ol+V/y/Gz2i5KY/shztDE7UQlMnF7mg5Hu2AywFe443fdtbwz2EKmTZNoVYDf2f99SdCriF+mMRiEbvJcAU3BFupI7Ffb3TgX7gSwAZoYOomZ8AX9kOWz73cBGwAUpba4FXk6tJ5iuJyUwIp1ZAmwIpiHIB2cmcCtED4UOpD6UxHRPyUtA84GdVAczUFNwo3PmYTf/mjbC2W3+VV0Ha0YXYSKdWYxL/Dckv4O2jKb5X4LQPDGdU/ISmOaDGbw8h1De1L8+mdLm3/71ZTnFIAWlBEYqzqYAOwFbA/cBl0CUdjBsZwnuam4ySmAGYHj+l4Cjc+Wl75/NHCiJyU7JS3jREFhjPhglMIORZwLTeOqe1ltopX99SU4xSEGpC5lU1TiwI4E3AbcCHpBJcgAAIABJREFU3wOWA78De033m41WAo/jHptL/qpY/5LTZzMv6k7WnpKXAlEdzGDleTOvMWDOOiltGk9pchoFTURkYOwFYH8Bmxrz3qvAVoK9oYftLwI7sPv1JbuqzP/SkPdnM08anSxenqONJe5To5Al0nwwg2XzwObktO0D/ec8Zfv2Gd/mgXxiEBEZGBtL6oztdgnY38AmdLn934N9sLt1pTNVmf+lIe/PZt6UxIwWInkBJTBpNB/MYNmNYO/JadtjwG4CuyqlzXn+/8Ij+cQgRaUuZFJB0VCbEasWANOAQ7vcwWI0lPIADNe/LAgdSf/k/tnMmbqTjVC3sWKKhnBdTncPHUlN5FgDE60GPgRMB5u55vv2Ptz/QYBH84lBikoJjNTRPf71lV2uvwTVwAxCFetf2un1szkASmKUvBSe6mAGJ+cBbaLf40aVOwHsbX5yyylg7wceAP7pGz6cXwxSRBqFTOqoccDbpsv1lwAv7FMskmwm9Zv/pdfP5oDUeXQyJS8lMB/4iquDiZ4IHUx12URgbfIbhcyLbvUDnOwAvA03OtkvIHoQ7HW+UYVqJSULPYGRirH3umI+2yelUWPYxfW73MkS1IVsECo2/8tAPpsDVMcnMUpeSqJ5PhjJT6MnQs4JDED0DEQ3QPQdiH7qkhcAXuxf/5B/DFIkSmCkat4AbALsnNJmA/96T0qbNItRF7KcVbH+ZSCfzQGrUxKj5KU8VAczIJNxN10eDxjDy30MvwwYgwSgLmRSNQuBn/l/SRoTXl3R5T70BCZ/Vax/GcRnM4A6dCdT8lJCC4ADQgdRcb6AP7L8dmE7AA9AFDNMso0D9sV1J6tTV2MRqR57LdhH2rSZD3Zv9/ME2HSwIX/wlFxUbf4XGMxnM6SqDrEcaqjkNBpGuT3NB5M/O8INc5zb9rcGW518LrDDwJaDvSC/GERE+sLG+YuJO8BendDmLLCXJLz3Bn/if1MPMUz121A3styUdf6Xdp/PvD+boVUtiRlU8pLluDaq/Vf8Z2Wn/GIqO80Hkz87FuzSHreR8tm3qWBLXcK+xnpTwf4FdmRv+xcRGQib6U/cBnZaQptNcJNbva1l+ZvB7nd3jXqKYYK/K7Rtb9uReBb5Yvc3h46kc+0+n3l/NougKknMIJ+8ZDmuDbcdC/Yr3/bwfOMqO7so/uJX+sNOAuuxy2jbY+aPwPZq+j4C2xXsdrCP97ZvEZGBsYnujo/dCfaKlHbjwD4Ldi3YFWC/dAdH27pPcSwF26M/25LRbBufIKbMWF9UWT6feX82i6DsScygu421+9zYNLDrwW7xyf1TYMtw3WcW4WZDPz//OMvGjnb/zyQfdg7YyT1uo91nf12w4/yx8gqw28Dmou6TIiLdsEVgB4aOopqqWP9SR2VNYopY8yLdUR1Mvmwe2JzQUUg9qQhZpDsaiSw/FZv/pa7KODqZRhurmOb5YOYFjqWK/ChkIoOneWBEurMYJTA5qOT8LzVWpnlilLxUj+aDyZkSGBGRcrHTwU4NHUX1lLn+RZIVvTuZuo1Vl+pg8mPLwdIm5hURkWKxE1Q0mwfVv1RXUZMYJS/VpjqYfNhEP3LYVqEjERGRzGw22PzQUVRPWed/kWyKlsQoeak+zQeTD5vmE5hJoSMREZHM7BCwhaGjqJYyz/8i2RUliVHyUh+aD6b/bAbYCl+3KCIi5WCzwFS82Feqf6mP0EmMkpd6UR1M/9m+YPeFjkJERDpi08GGwDQUed+o/qVeQiUxSl7qR3Uw/WdHgN0UOgoREemITfX9f6eEjqQ6VP9SP4NOYpS81JPqYPrPjgW7NHQUIiLSEZvguzttGzqSalD9S30NKolR8lJvqoPpLzsJrAST04qISAtbCrZH6CiqQfUv9ZZ3EqPkRVQH0192DtjJoaMQEZGO2SKwA0NHUQ2qf5G8khglLwKqg+k3mwc2J3QUUl8qQBbp3hJgcuggKmJ3YH7oICSkaC4YwNnuN
epD9xQbD5wHTAdmQnR379uUkroFWAbsCswLHEsVTMadA0WCGBM6AJESW4wSmD6wCJfALAgdiYQWzQUOB87o/UmMkhdpFg0BV+OONdI7JTAiIuVkp4OdGjqK8lP9i7TqtTuZuo1JHNXB9I8tB9s5dBQiItIxOwHs/NBRlJ/qXyROt0mMkhdJojqY/rCJfhqBrUJHIiIiHbPZYPNDR1F+mv9FknSaxCh5kTSaD6Y/bJpPYCaFjkRERDpmh4AtDB1FuWn+F2knaxKj5EWy0HwwvbMZYCt8/aKIiJSLzQJTEWNPVP8iWbRLYpS8SFaqg+md7Qt2X+goRESkKzYdbAhMw5F3TfUvklVSEqPkRTqhOpje2RFgN4WOQkREumJTfT/gKaEjKS/Vv0gnWpMYJS/SKdXB9M6OBbs0dBRSb407xwcCB8S8/2Pgwpjl7wTeGLP8f4mfIOp9wF4xy08lfvK6jwCvjll+MnBdzPJjgJ1ilp+Am7yq1XHAdjHLPwfcEbP8y8ALY5Z/EvhHzPKTgM1jln8U+HfM8u8QP5/IkcCjMcvPBOLuHh0GLI9Z/iNgfMzyt+FnjmsyFvhJTNsVwKExy9cDzopZ/hjw3pjlGwHfjVn+IPDhmOVTgVNilv8T+ETM8mlAXP/mvwCfjVm+Le7v3upW4Esxy18OHOu+XHcMPAm85hzgcuDrMe13AT4Ws/wq4Nsxy18LvD9m+f8BZ8Qs/w/cvBmtLgLiJgLcHzgoZvl5wM9ilh8M/GfM8nOAX8csfzcQd2HwfdzvqIlF8OQ+8Inb/P6bfQv4Xcx2Pg7MiFn+NeCGmOWfAnaIWf4F4M8Jy1+csJ2/Jex3y4Q447pYfAvYNGb5B4CHYpZ/H9ggZvm7cZPytToHWDtm+cHAMzHLW3/v+HYHxyxf22+/1TIfT6sNcPG3egj387baFPf7aXUf7vfJ6Mkub9sErpkNL3w27DUfbvuab/833N+r1Ytxf99Wf05YvkPCdm7A/d1bzRiJc5TfEf9zzST+93A58b+3vYn/Pf+a+L/LfxL/d/wZ8X/3g3DHh1Y/xB1PWh2OO/60OgN3vGp1JLBnzPLvED8H1Edxk062Ogn4Q8zyTwI7xiz/MvCnkW+jIbCr4aJPE//7/CzufNHqRNz5pdUncOejVqfgzl+tPow737X6Lu782Oq9uPNpq7Nw599Wh+LO161+gju/NzPcdUCr8bjrhlbLcdcZrXPATMRdl7R6FPd3bzUZ93dv9W/c373V5ri/e6t/4P7urV6I+7u3uoP48/12uOvCVrfgriNb7YS77mx1He46tdWrcde1rebjroNb7YW7bm41D3ed3eqNuOvyVhfiruNbHYC77m81F7ggZvk7gDfFLD8buCRm+XuAWTHLTwN+G7P8Q8BrYpZ/A/h9zPJRjsd9kFv/zUlo//WE9nEXoI2g49q/K6H9DxPavzWh/YUJ7fdJaH95QvvdEtpfl9B+ekL7Pye0j0uCAO5OaP+chPYPJbRPeiT+dEL7uAK88Qlt4y6WAJ6d0P6BhPabJ7SPuzgEeElC+6RuRzsltL8mof3MhPZJd5feOLrdUoM9DPh5QvuDErb/g4T270loH3ewB5ccxbX/akL7zyS0jzt4gztpx7WPu1AD+J+E9jEHY9sGVhtsHNc+7sIL4PyE7ccdXAF+k9D+tQntr05o/8qE9jcntN8mof2ihPbPS2j/74T2Gya0fyKh/YSE9nFt4y5+AJ6V0P7hhPabJrSPu9gD2Cqh/e0xYR8CNgRLDLZsbR93cQvuIiJu+1cktN87of0vE9ofkNA+7kIQXAIQ1/57Ce0/lNA+7sIO3I2WuPafT2j/xYT2cRdq4C7Q49rHJWXgkrK49ocltP9xQvu3JLS/KKH969dsakfDbY8ltI9LmgCuT2j/soT2CxPaxyVBAPcktN8kof0jCe3jkhqAlTFtVye0XSth24+7t+0csOYL9Y0S2v8rYftbJLT/a0L7bRPaJ3Vje1VC+6sS2u+Z0D7u4hxgv4T2cTcGAA5JaB+XjIBL+uLax90IATg6oX1c8gUuiYtr/+mE9l9LaB+XbIJLyuLaH5HQ/pyE9nHJ9bAxaW+KSDtLiH94JhnMhHseiX/wIJLExgMHwDOPw/rEP6wXSTUfXjIx+Z6ftDGF0U9gRAau0YXsfGLvcnFbwnrnEn+X6+aE9qcTf5frjwnt/4f4u1xJI4f8N/F3uZLi+SLxd7niuo+By0rj7nj+I6H90bgza6u47mMAHwTWjVke130MXBYbd0c1rvsYuOw/Llm1mGVDxD9ajOt+Aq4PVVz7pxPaP5zQPukJz/0J7ZcmtL8roX3SXeLbE9rHPeIHuHF0+/W/BG+/Gs5LupNyTcL2k+5CX5HQ/q6E9hfjfket4rpAgHtSFPde3P9/cHdB4+5y/SlmGbg7SnF3uW6MWbY7PPN/xD+yTrqLfjLxd7niuo+BuwMV97eJ6z4G7klU3IhoSU8IjyG+i1fSCD0fwT3JaJWUxb2P+C5hSf9f3snIcb1Z0v/fuM9a0l3ZpxPar0xovzSh/VMJ7R9IaP/4yJc2Hvf3nw6X7A5L3g0/+AC87jQ47Grf6JGE7d+ZsP3FCe1vSWifdFf5uoT29yS0n5/Q/u8J7ecltE+6a30h8ceNpKHff4rrOtsqbhm4p8hx3TuS7op/D7gsZnnSdcA3gV/ELE86NnwVd23SKu5YdQvYk/DZb8ExrU/zk46dc4i/Dkg6ls8m/jog6dxyFPHXAXHdx8D1YIm7Dkh6gvp2st+0XkX6dUBrF7InEtonXQcsSWifdFy7N6F90nXAooT2ScfZ2xLaJ/Uk+WNC+6Tj/tUJ7e9OaH9ZQvuk89CviD/O3JnQ/nzijwNJ58UfEv//NKknzBnAlTHLk87T38b9DK3iSkZEpD/sQrDjQ0dRPpr/RTqVVLDf6WSXIoDmg+mB3QO2X+goRESka3Y6WFwRnqTS/C/SiXajjSmJkU5pPpju2XKwnUNHISIiXbMTwM4PHUX5aP4XySrrUMlKYqQTmg+mOzYRN33AVqEjERGRrtlssPmhoygfzf8iWXQ6z4uSGMlK88F0x6b5BGZS6EhERKRrdghYUlGsxFL9i2TR7SSVSmIkK9XBdM5mgK1wx3ERESkpmwWm4SQ7ovoXaafb5GV4fSUxkoHqYDpn+4IljbYlIiLlYNPBhsDihq6VWKp/kTS9Ji/D21ESI22oDqZzdgRY0lDZIiJSDjbV9weeEjqS8lD9iyTpV/IyvD0lMZJCdTCds2PBLg0dhYiI9MQm+O5Q24aOpBxU/yJJ+p28DG9XSYykUB1MZ+wksB+GjkJERHpmS8H2CB1FOaj+ReLklbwMb19JjCRQHUxn7Bywk0NHISIiPbNFYAeGjqIcVP8irfJOXob3oyRGYqgOpjM2D2xO6ChEVHgs0rslwOTQQZTE7sD80EFIUdh44DxgOjATorvz21c0FwzgbPcaqRuMANwCLAN2BeYFjqUMJuPOeSJBjQkdgEgFLEYJTAYW4RKYBaEjkSIYZPLSEM0FDgfO0JMYcaIh4GrcsUnaUwIjIlIN
djrYqaGjKD7Vv0jDoLqNJe5f3cmkiepgsrPlYDuHjkJERHpmJ4CdHzqK4lP9i0D45GU4DiUx4qkOJhub6KcN2Cp0JCIi0jObDTY/dBTFp/lfpCjJS4OSGAHNB5OVTfMJzKTQkYiISM/sELCFoaMoNs3/IkVLXhqUxAhoPpgsbAbYCl/PKCIi5WazwFTUmEr1L/VW1OSlQUmMqA6mPdsX7L7QUYiISF/YdLAhMA1Lnkj1L/VV9OSlQUlMvakOpj07Auym0FGIiEhf2FTfL3hK6EiKS/Uv9VSW5KVBSUx9qQ6mPTsW7NLQUYiISF/YBN89atvQkRST6l/qqWzJS4OSmPpSHUw6OwlME8BKIWTt8jIGeDXwVuBFwGNA5P9NxM1e+zPg3ozb+wqwHfAcYFPgXODYzFGLFEq0EuxxYApwe+hoCugluN/N1aEDkUEJMUllv0RzwQDOdq+RLtjqYwFwQOggCmwybuJmkVLYHbgF+B2wJzC25f2NgI8CDwKn+u/beRvwaeB+3Bniv/sVrORiDHAxLkHdI3AsBWWLwA4MHUUxqf6lXsr65KWVnsTUj+pg0tk8sDmhoxBpJwJOAoaAj2dovyHwW2AJsEvGfXwEJTAhTAE+1EH76bi/kwFn5xFQ+dnvwT4YOopiUv1LfVQleWlQElMvqoNJZzeCvSd0FCJpxuIe/xvwzg7Wm4DrTrYceFOG9gehBCaEVwH/00H7McAFwF3AbrlEVHp2IdjxoaMoHtW/1EfVkpcGJTH1ojqYZHYP2H6hoxBJ82VcYvHdLtbdGNc1bBmwTZu2b0MJTAgH0FkCI23Z6WCnho6ieDT/Sz1UNXlpUBJTH5oPJpktB9s5dBQiSfYCVgNLgUldbuMIXGJyBzA+pZ0SmDD+ByUwfWYngJ0fOoriUf1L9VU9eWlQElMPqoOJZxP9dAFbhY5EJMkNuKTihB62MQG4z2/nyJR2SmAGbxxuFBElMH1ls8Hmh46ieFT/Um11SV4alMRUn+pg4tk0n8B0e2NbpK/GtHz/H8CO/uuf9rDdlcCF/uvPsObIZRLOEbihEKW/FuMGR5BhFuFGMVwQOhLJQ5mHSu5WNBc4HDhDSUxVRUO4Id93Dx1JwUzGXds9HjoQEVgzgXmrf30YuLXHbV/pXzcDZvS4LemPtwGnhA6iopagxLCV5n+prDomLw1KYmpgATAzdBAFMxlYApGFDkQERk9kGQH7+K8b3ch6cX3T128ErsmwzouA9+AGAhjCTXT5OHAacFXKeuOA9wJbAav8sjHAusA3cZNmrg/8IGbdCX7dfXCJ27P8fq8ATgYebWn/FWB73+Y5uK52/wO8GHifb7Mj8Gtc17izcYnDCv8zjfP7/APxB8gTgNm+/WpgbeDbwDH+/Um4ST8j//WGuL/VicCf4n45wLtwwyZvw8jTsKNwv+tmv8fN9dPsHcBbmn7eBbQfmW4fv94zwNP+ZwD3+/9twjqfxiW6m/r9nI/7PawFHI1LhNfC/e4fxv1d56XE0MtnohtLgA3BxkH0TJ+2WXYzgVsheih0INJPdU5eGjTZZcXNB77i6j6iJ0IHUxBT0CSWUlBTGZnr45w+bG/tpu0lXWg218AcBnwVd1HZbD/cBIpnED8gwIa4ZOA/Y95bD5dAPI2bbLPVVsBC3MVy8wSczwIuAx4AdmhZ5y24BKIxCedHgVcCZ+ESk8l+f+aXA2wAzGHk97ELaz79algH93sw3FOsHRlJNLfFXThsFhPTo8D3GJ2UxjnMbztrDcyuuATiTr/eT1LabgBcgpv0cmrLe8/DJS/n4f4urd4A/BduqOZGfBvh/jYvbGl7im9zREIcvXwmumRTff9gdSMbpvqX6qlbzUs7qompJtXBrMmOBbs0dBQicV7OyAV2v7oZLfPbS3oy0EhgbscN3ZxkW9zcMj+Pee8nuOQgyTq4BKj1YnUq8C/gJuJrdNbCXbTfx8gThGYf9bEfA5zr9wPwbNwTghW4pzINEfAXv84rUuIF2NrH1jwKylq45GDdhHWO8tv+Zpttd5rANOxPegIzEdftcB7JydkE3JO063A/T5xD/H6+jUsK4woGx+EStvsS9tXtZ6IHNsEPF7xt/7ZZZpr/pXqUvMRTElNNmg9mNDsJTE8apTCaL/6a52lY1dqwS43ttJsDYgj4bMr7t+PmpNmf0aOaRbg793emrLuc+AEJzsJ1VTrS77/VCuBbuETnqJj3G49S3wxc5PcD7sJ6c2CTlrgM94QE1uy61WpX4Hig+dH1Ln67b09Y5zS/7yNxT0L6bUmb90/DJZofxHV9i7MSlzS8Cvh6QpvG73Uv4HfAYzFtnsF1SZzKyFOuhl4+Ez2IGsWNegLjqP6lUtRtLJlqYipKdTCjTUZdyKSgXsHIE5jT+7C9sbgLWQP+nNCm8QQmSx3CDr7t/bg7+eDu4q+mfZe3/Rl9t/3Vfls3t1lvA9/u7pj3GrE/QvLThFYb4S6en2D005VWv2HNblYfYeTv09qFrGGBf3/7lG13+wRmV5KfwDSe3iXVt7S6CZfcxo0nv6ff1krS5xA627fbv2V5t5+JPrBFYAf2d5tlpflfqkNPXrLRk5hq0Xwwo9k8sLSeDSID1fwE5oGmrzfsw7Y3xN0NB/h3m7ZJd+yb/Qk3ueZzgdf4ZSuAa4FDcd24XpSw7pW47lcNjSL0dgMLLMU91XgeyV23/uTjyOJh4Ge4GpuDE9psDywCnmxZPh/XrWw+8GDCuvf61zyewKR5t3+9LmP763DdwNIGA1hE+pPAxnvrtCzv9jPRDxqJbMTuuM+qlJqevGSnJzEVcwuuG/yuoQMpCD8KmUgxNCcw/2JkxK2t+7Dt5sLrpCcwnTBczQPAbk3L3wX8HTfq1V+A23DdmQ4Fnu/bPIq7IG5ojO9+T4b9/hOXiCXNPntfwvIkp/nX9ye8/z5Gupo1uxXXZWoPki/sGyPHPbvDmHq1l3/NenBrtNsrpc0/ug+nq89EPyxGCQya/6UqlLx0TklMdWg+mBZKYKRQmkesGgIuBQ7CDbf7bNYcQrgTzXctLu5hO80e9q/NI1z9FTf87qdxJ47t/L9GrcwfcMlCc3eWRhesPXF1MGnm+39J/3E77RN6Da6mZwdc/UbzcNPrAlv499NMwCUyu/n2D+GeYjUmIU3repWHzf3r0oztG3Utz0tp00sdVjefiX5YgmpgQPUvFaDkpXsaYrlCFgAHhA6iIFQDI4XSOuTuz3EJTOMO6oU9bHsP/7qE9DlcOtGIt7XL2RLgY8AngZ2AnXGF4rv712t8PI1koXGB/wtGnoh0K0v3t1an4WpQjmR0AvM20ocpXhc32MF7cd2kfgJ8jZGZcafgLh4HrfF3yZo4ddq+G51+Jvq1z9Yhn+toJpr/pcSUvPROSUxFzEfzweDrgNZGT2CkQFqHoL0ANy8KuIvkbm0BzPJffxVXkN0PjdHM7vev4xk918cK3IXp14G34u7wH4urkziPkQvnRr1PqOK8c4GncMli8zDBbyZ+qGiAF+C6kX0U181sX+DHjCQv/fB6H0OnGjVOWWunGjU696e
26k63n4l+UA2Mo/qX0lLy0j/qTlYBqoNxGj0LlMBIYbQmMKsZmT9jH7q/m/8R3Chk/wRO7XIbrcYz0vXrSv+6PnBcyjpP4xKoL+KSqkYXq8YToc3jVhqAx4C5uCcqjRPbS3HJY1yyNx7XDW8abnb6CxK2G8UsS6rdibM2yYMVpGn8Ptt1x2todAHs15O5Zt1+JvphMbXvQqb6l/JS8tJ/SmLKTXUw3mTctUk/b5iK9CRuEsCLcJMhRrjhlJMmJUwyHfgw7s73AYzMj5ImbiLJVrvghhb+B6NHD9sB2LTNuo1uYo2nHef61z1i2sbZN8M+OtWIqVGX8V7g+wlt98NNivkQcGbKNhuJQXMi88WWNs/417jJOdfDPRnqVGPI4qx3qRrtftTFvrLo5jPRD3oCo/qXklLykh8lMSWn+WCGC/gja9tSZECSkpP/An6Nu9D8cgfb2xh3URrhRoK6IeN6Lyf+6UGzRpe2OYyeeLKxrzSN/3S3+tdLgStwhd2zYtcYMQH4OP0vXvsjbh6a7XGjcU3GjZwVZxv/+ldGEpBWY3C/R3BPIRpaDzh3+9e4p0+b4Z6adepy4DLc04yXt2k7E/ck6Tzc7yAP3Xwm+mEJsCFYP7ullc1MVP9SMkpe8qckpsTmAzvVfD6YKaiAX0pkLK77l+EKxdtdlG2BGy75MeB1GffRmAzy28CnUtrtieve9q2W5Rv59ZeSfuf7KNwFdrNNcBfzdzFSWxPny37/rQ72+/5myrrtvM9v4wHcU5YkWSbNPBr4jm/XPNlU65DMY3Gjuf2bkQlBG+ambP81fttJM9hPwT0duz5lGxNxCcNCkp98zPL7uSjh/Yb/9e1a55Lp5TPRI5sKZmA17kZmPwX7RugoJCtNUjlYmuyyfGws2KNge4eOJBw7FuzS0FGIdOr1uMTkZlxdTGt3r42BY3B9I88iex0EuAvzxqABH8eNGtX8JGY87kLzEdzoW60aF6sfw00Q+eKYNm8FbkyI6znA73Bzhby65b0pwH/jCu1brY3rgmR+21vSXe3Is3C/t3tJ70Y3FjjD7++HLfsag/sdvR9XRP8Qbqb7ybiRtz4Zs70P+G19rmnZm3FJUJzxvq3h5lRJmmem8fv8PWv+LbbDPZG7lORi/wnAiX4/d+GSzDgb44orDfd3aE6Yev1M9MAmgK0G27a/2y0Li8AeAOtmIAgZOCUvYSiJKR+7COzE0FGEYyeBaSQ9KZR23bYaxuK6hrwV1/1nKa4b11jciFL/h7tY7HTywd1xF5Fz/fe74pKa5bgL0S2AO3FPXuImHVwf+AZuJviJuKc4L8Q9YXgGd0G8EPdkImkktAh4C26Sw0m4x6SrcUnTN3Ddtprdhpvd/Rn/bwzu6dQ4/3Mcmu1HH3YK7gnMVzO0/Q/gENyIZIt8jKtxI5c16oK2wz01moor9v8K8UM9vw2XsDyJ67q2EDgppt08XK1Q4+cdi0toFpM8CMJbcIMTjMPV1KyDq4n6Aa5rYpz5uISr+fc63se+O6672fNwn4cxvs2Q38dY/zNsQ38+Ez2wpcCbIbqybdPKsW1wNzumqAtZ0anbWFh2EHA28B4NsVwGdjRwAEQ7h44kDDsHeAiij4eOREREcmGLwA4MHUUY9gGwfk8OKn2nJy/FoCcx5WE7gq2qbx2MzQOb076dyODUudhYJA91HolM878Unp68FIcmuyyR5vlg5gWOJQQ/CplIcXQ6RLKIpFtMLRMYzf9SfEpeikejk5VD7eeDUQIjIlJtdjpYvyZvLRHbxg9gkDainwSjbmPFpu5kxWdHg10bOoowbDlYTet/RERqwU4AOz90FIOn+pfq4+wLAAAgAElEQVTiUvJSDkpiiq2udTA20U8PsFXoSEREJDc2G2x+6CgGT/O/FJOSl3JRElNcdZ0Pxqb5BCZp7jYRESk/OwRsYft2VaL5X4pJyUs5KYkprjrOB2MzwFb4OkcREakmmwVWs2JH1b8Uj5KXclMSU0x1rIOxfcHuCx2FiIjkyqaDDYHVaIhy1b8Ui5KXalASUzx1rIOxI8BuCh2FiIjkyqb6/sJTQkcyOKp/KQ4lL9WiJKZY6lgHY8eCXRo6ChERyZVN8N2ptg0dyWCo/qU4lLxUk5KYYqlbHYydBKZJVkVEqs+Wgu0ROorBUP1LMSh5qTYlMcVRtzoYOwfs5NBRiIhI7mwR2IGhoxgM1b+Ep+SlHpTEFEPd6mBsHtic0FGItKpRobHIwCwBJocOYkB2B+aHDqK+bDxwHjAdmAnR3UHDkRxFc8EAznavkbr1hHELsAzYFZgXOJZBmIw7p4kUypjQAYhU0GJqkcBYhEtgFoSOpJ6UvNRPNBc4HDhDT2JCiYaAq3HHvjpQAiMiUg92OtipoaPIn+pfwlG3sXpTd7Kw6lQHY8vBdg4dhYiI5M5OADs/dBT5U/1LGEpeBJTEhFSXOhib6KcF2Cp0JCIikjubDTY/dBT50/wvg6fkRZopiQmjLvPB2DSfwEwKHYmIiOTODgFbGDqKfGn+l8FT8iJxlMSEUYf5YGwG2Apf7ygiItVms8AqXvSo+pfBUvIiaZTEDF4d6mBsX7D7QkchIiIDYdPBhsAqPEy56l8GR8mLZKEkZrDqUAdjR4DdFDoKEREZCJvq+w1PCR1JflT/MhhKXqQTSmIGpw51MHYs2KWhoxARkYGwCb571bahI8mH6l8GQ8mLdENJzOBUvQ7GTgLThKkiIvVhS8H2CB1FPlT/kj8lL9ILJTGDUfU6GDsH7OTQUYiIyMDYIrADQ0eRD9W/5EvJi/SDkpj8Vb0OxuaBzQkdhUicChcZiwS1BJgcOoic7A7MDx1ENdl44DxgOjAToruDhiMlFs0FAzjbvUbqCtR/twDLgF2BeYFjycNk3LlMpHDGhA5ApKIWU8kExiJcArMgdCTVo+RF+i2aCxwOnKEnMXmIhoCrccfEKlICIyJSL3Y62Kmho+g/1b/kQ93GJE/qTpafKtfB2HKwnUNHISIiA2MngJ0fOor+U/1L/yl5kUFQEpOPqtbB2EQ/HcBWoSMREZGBsdlg80NH0X+a/6W/lLzIICmJ6b+qzgdj03wCMyl0JCIiMjB2CNjC0FH0l+Z/6S8lLxKCkpj+q+J8MDYDbIWvexQpHI1CJpKPJcAUsC2ATYGNcQWRT/vC2oKznYDvAVcBv8UVqj4XmOK/lp6oYF9C0ehkOVgAHOC+tBcBM4G9gGdD9LpgUWVmGwMHAo8CD+LOXy9yr5GFjExERHJj64CdAXY52G1gD4Kt9I/fDWzIF0OuBFscOtpsbG8f9yp/F27IP31ZDLYv2AahIyw2Gw+2ecp7evIigWV5EmPPH1w8ZWUvAvuyH9zkYf+6zB8zHw4dXTa2lz9XPeWP99b073Gwf4LdAHYx2BGhoxURkb6wdcCeaDnox/1bDfbz0NFmYzv5eFt/hqGmhOYOsFNU5BnHPu+T1pbhVZW8SJEkJTEWgX0X7Bn9/45j++PqAR9qSljijvl3hY40G5voj+ntzmFDYN8LHa2IiPSNfd1fsKYd/J8C+0DoSLOx52c4mT
X+HRo62mKxDfwFzWp/ceiTGCUvUkStScxw8rIC9wRWXczWYFdlvOD/Q+hIs7PbMt6E2z50pCIi0je2RYYT2mqw7UJHms3wEJpp/1aB3QymCXFHseObktnV/kJwlpIXKa7hJOZQsO+wZhfYF4WOsFhsO9zTqXbHyF+GjjQ7+zprdh9rffpyVegoRUSk7+xif1GfdAJ4olwX+6k/SyOB2TZ0lMVik1izO8lqRmqItgwdoUg8e7u/KG+9MF8Bdm7o6IrHvuyTvrQbVqeFjjI7269NArMS7E2hoxQRkb6zPdvclft16Ag7Yw+n/CxPg30+dITFM1z7knQB8NrQEYqsySL/5CXp+KWnMGuwtcDuIvnJ+9NgXwwdZXa2IfF1j41//wYbGzpKERHJhf0l4STwFNjs0NF1xv6WcjHzD7C1Q0dYLLFPX1rvyK5QEiPFMpy8rEz57K5AtTAx7DUpCcyTYB8LHWFnbFHCz7Ic7OjQ0YmISG7sSOK7FQyB7Rg6us7YtQkns2fAXh06uuKxL6Q8fWlOYpoK+0VCGlWw366eYwiNSBbDzko45j9J6QY4SUxkV7gnNCIiUlG2LvFDKj9VvsfvdmHMz/E02OmhIysee7a/YGl3EdhIAK8JHbEIbg6TxnxP7T63egoTyyYxMpxy67Fyn9DRdcYOjEnGVoB9P3RkIiKSu9ghla8IHVXn7LuM7h7RmKhNk1iuIdPTl6dxk8IdD7Z+6IhFHHsB2P8yMsdTu6cwqoVZgx0YkwQOgb0idGSdsU1j/uYaOllEpB7WGFL5KbBjQ0fVOftiy924VWD7h46qeNrWvihxkRLIlMjoKUwiuyTm9/aC0FF1zu5pScKuDh2RiIgMjF3MyIg+z4DtEjqiztnHmp4qrAT7v9ARFZN9IabbhRIXKam2iYyewsSyzWOewpbw/72d2XTuWgm2X+iIRERkYEYNqbwCbELoiDpnhzZdwDwFtlnoiIrHJrFm7YsSF6mAxERmlZ7CJLGPMFIE/wxYFDqiztk7m34GDZ0sIlIvFuGGGjawP4aOpju2D67/8zNgHwgdTTHZlxjpLrhCiYtUj70E7MeMLvZfracwcWwM2M3+d/RE6Gi6Y1s0/Y3/K3Q0IknGhQ5Ays72BHYKHUVB3QpsCawC+2TgWLqxORABjwHrl/RnyNOzgDnAGGAZ8CXg2xA9GTQqkb6K7gAOxk3K+HngANxxYR7Y94KGVkzzgZcBY0t8zFyFuz6cWOKfIU8rgPMg+lfoQOqshI83pVhsKTApdBQiAZ0MHKfERerBXgLMA54XOhKRgL4JUckmKq0WPYGRXk0A7gU0S2+8CLDQQfSg7PHn6VPADsBXlLxIfUR3gH0H+CpwCnBt4ICKqszHzsbN7bLGn6dpwIm4ax8JSAmM9MPjEJ0fOgiRwbJ34RIYkbq6Tsd+qZeyze1TXWNCByAiIiIiIpKVEhgRERERESkNJTAiIiIiIlIaSmBERERERKQ0lMCIiIiIiEhpKIEREREREZHSUAIjIiIiIiKloQRGRERERERKQwmMiIiIiIiUhhIYEREREREpDSUwIiIiIiJSGkpgRERERESkNJTAiIiIiIhIaSiBERERERGR0lACIyIiIiIipaEERkRERERESkMJjIiIiIiIlIYSGBERERERKQ0lMCIiIiIiUhpKYEREREREpDSUwIiIiIiISGkogRERERERkdJQAiMiIiIiIqWhBEZEREREREpDCYyIiIiIiJSGEhgRERERESkNJTAiIiIiIlIaSmBERERERKQ0lMCIiIiIiEhpKIEREREREZHSUAIjIiIiIiKloQRGRERERERKQwmMiIiIiIiUhhIYEREREREpDSUwIiIiIiJSGkpgRERERESkNJTAiIiIiIhIaSiBERERERGR0lACIyIiIiIipaEERkRERERESkMJjIiIiIiIlIYSGBERERERKQ0lMCIiIiIiUhpKYEREREREpDSUwIiIiIiISGkogRERERERkdJQAiMiIiIiIqWhBEZEREREREpDCYyIiIiIiJSGEhgRERERESkNJTAiIiIiIlIaSmBERERERKQ0lMCIiIiIiEhpKIEREREREZHSUAIjIiIiIiKloQRGRERERERKQwmMiIiIiIiUhhIYkSBsb7BVYMvAHgZb0vLvEbDlYBeHjlREpFrsBWBPgVnLv6fB3pJh/c1828fBHmo6Zi8DWwn2w/x/hk7pnCMi0sSeAvtz6CjKx8aAbQS2Ddg7mk6gK8HeB/ZCsClgY0NHKknsEv83mxw6EpHBsmP8Z//A0JF0z54NNhVsFtgzTcfg6zKuPxXs9WC3+PX+CvYusJeBrZVv7N3QOac/7BX+93Zq6EjqblzoAETqKVoNPOz+2aSmN+ZB9P1AQYmI1ET0KPAo2HOAy4GpwHbAq8CmQ3Rzm/XvB+4H2xA4CdgFoofyjbkXOudItagLmUh4uzd9fVmwKERE6mcP4ArgO03L3t/B+s8Bzix28rIGnXOk9JTAiIS3R9PXVwaLQkSkfvbAHXd/BCzzyw4GWz/j+q8GLskjsBzpnCOlpwRGJCgbD+zqv1kC3B4wGBGRGrFxwPbAzRA9gUtiAJ4FHJJh/QjYEbghpwBzoHOOVIMSGJGwXgms57++EiILGYyISI3sBNwE0ZD//rSm947KsP7LgDsgWtX3yPKjc45UghIYkbD0KF9EJIxG9zEvugX4g/9me7BdOlu/FHTOkUpQAiMSlk4mIiJh7AH8tmVZ81OYdsX8SmBEAtEwyiLB2FrAzv6bf0P0ly6381zgHcCbgPHAU/6N3wOnpI+OY5sB7wZmARGwElgNfBuiX3QXj4hI0dl4YGvgtpY3fgqcAmwAvBVsNkQPx6w/FtgBuLHL/W+Mq7N5M+7YGwErgKuBr0H0dHfbTd1nAc45oPOOiBSAJrLsnu3eNJlYFzM323iwT4E9BnYy2CYt7+8JdlXLmP+N99b26zwJ9m2wLZremwp2A9i7O4+pTjSRpdRVJSayfDXY+QnvfaPp2PzxhDY7gf26i/2OAfsQ2FKwuW4CyeH3tvDH3j+Dbd35ttvuO+A5B6px3tFEliIVoQSme3Z808nkiA7XnQR2OdgTbjboNd5/IdgfwZ4GO6jlvc3B/gS2Cmy/hO3vCXZHZzHVjRIYqatKJDCfBftAwnsvaTo2/9WPNtba5hNgR3e4z7XAzvPbPSahzYv8sfl6l+z0U6hzDlTnvKMERqQilMB0zxY0nUxe0MF6a4P9wa/3joQ2ZzZt+41NyzcGuzP9ziL4u4uPZo+pjpTASF1VIoH5rUtUEt+f33QM3TPm/UvAXt7B/saA/dpv77Q2ba/z7Q7Pvv1MMQQ450C1zjtKYEQqQglMd2wdf6fKwP7Z4bpn+/UWpLQ5FOxhsB+NvotnF/p1b8f14U5a/3NgczuLq26UwEhdlT2BsbXA/t6mzUFNF+QtXc1sHNg9nT0hsS/5bd3t9p/a9se+7VnZt992/4HOOVCt844SGJGKUALTHXtt08mxg5OU7da0XsJj+MR192la9/CUdgeB3Qu2eWfbrxslMFJXpU9gZ
rokIbXNBLAH/c+5Cuw5Te/NAPtlB/vb2m/DwI7L0H6+b9vHCTJDnHOgeucdJTBFoVHIRMLodijLL/rXJcCvOtxnY2K2p4ELcP261wI2ArbBTcp2AHAvsBtE93a4fRGRMogbPrlFtBLsf4FjcddKRwBfalq/k+P2F/w2VgNnZ2i/TSOIDvbRTohzDui8IzlRAiMSRhcnE5sKvMZ/cwlEq7PvziYCjcLLx4DG3cMngSeAu4Drgf0gejD7dkVESmcP4F0Z2n0f+CQukXgf2IkQDfn1P5FtVzYRN9wwwBUQ3dOm/cZA46luh129Ug34nAM670ielMCIDJytB7zSf3NXB3ecZjJyR+6qDnc6DZjgvz4NouM7XF9EpAJsHeA5EN3Vvm30D7BLgb2BzYF9/PcvBm7NuMPXAGv7ry/P2L7hsoz7aCPIOQd03pEc9XmIPhHJYFfc5F/Q2aP85zV9fWeH+9ys6es+9qsWESmVXYFrOmjfPGLY+4FXAX+EyDKu3zTXCbdnaP8W/7oCuCDjPtoJcc4BnXckR3oCIzJ4zY/y2/TDHmVV09edzqC8rOnrRR2uKyJSFZ3Wr/wauB+YCuwDPNjh+s3aJDA2GWgUyn+rj92qQpxzQOcdyZGewIgMXvPJZH4H6y30r88Aj2RbxWaAvbJpXXCFpFnWfS7YVtnDExEpvA4TmGgION1/MwZ4d2fr0zxK57/atP0EsB7wd+DLHeyjnRDnnOb1QecdESkWDaPcGZvYNJxmhzMO20Zgj/l1p2ZoPwvsZrBJ/vs7/LqvzbDu9mC3FH9Iy5A0jLLUVVmHUbaNwbp4EmBTwZ7xP/ODfiStrOuu33TcfmFKu53BVuBmun9p5zEmbjfgOQeqd97RMMpFoScwIoM1k5Gum1d3tmr0MCPDeL4zuZ2NAXsPMAd4HUSP+Tdm+9cD0vdj++LuOB6gIS1FpEI+BiztfLXofkaGEJ7fQf0LED0OfMZ/k5Dw2ba4epdHgT0gyjpAQBYzCXfOAZ13RKSY9ASmPVvPPxbfxU1MNjyp14/BXgY2BSxjPZqNBzvN/973b3lvY9xsyL8F+zDYhJj1j/frvr1l+QZgB4DNw82GvH5XP2qt6AmM1FWZnsDYJmC7ujvmtto/Sfm4v9s/vv36w9t5vf+Zj2rfdo11x4B93z9d2bdp+TpgHwRbBnYB2GbJ2+hofwU650C1zjt6AiNSEUpg0tlBYENgT4ItBXu06d9Sv/wZsLkdbvc//UnjL2CX+38X+5PIem3W3c3tz64Eu8yv+xuwo90JRbJRAiN1VZYExl7qk5aVLcfgx3Hdqr7YfhvD24rAbgSb1kM8bwT7uT92XwF2Hdh3wWZ0v8019lHAcw5U57yjBEakIpTASF0pgZG6KksCI9JvSmCKQjUwIiIiIiJSGkpgRERERESkNJTAiIiIiIhIaSiBERERERGR0lACIyIiIiIipaEERkRERERESkMJjIiIiIiIlIYSGBERERERKQ0lMCIiIiIiUhpKYEREREREpDSUwIiIiIiISGkogRERERERkdJQAiMiIiIiIqWhBEZEREREREpDCYyIiIiIiJSGEhgRERERESkNJTAiIiIiIlIaSmBERERERKQ0lMBIDdnmYB8DuxDsJrCHwJaB3Q52OthOMevsB3bi4GMVEZH+0LFfpCqUwEiN2D5g1wD/BI4B/gacCOwNbA0cDPwJOBfs62Dj/HqvAn7i3xMRkVLRsV9EREaxp8D+HDqKdPYysMvBzN9peyPY2JT2E8DOBjsN7PlgD4KtBpsysJClBOwS/5maHDoSkcGyY/xn/8DQkaTTsV/6zV7hP0+nho5ERHpS9ATG3gu2AmyVP+mOz7jeGLCFYEv8wUp34KSFEhipqzIkMDr2Sx6UwBSFupBJRdk4sO8B3wdWAm+E6GsQrcq2frQaOB7Y2C+4PIcgRUSkr3TsF6kDJTBSVWcA78OdwPaG6NIutnE+8IT/WicxEZHi07FfpAaUwEgF2QnAYf6bD0J0TXfbiQxX7LkKuLovoYmISE507Bepi3GhAxDpL9sfmOO/uRiiM3rc4D+AZRAt63E7IiKSGx37RepET2CkQuxZwDf9NyuBj/Rho2NRFwIRkQLTsV+kbpTASJUcD2zmv/4FRH/vwzbvBC5IftveC3Yd2G3+DmDze/uALQC7Fux6sCOz7dI2AzsO7Hdu7gK7EuwKsDdnD9veAHaR3+/lYOeBbZlhvQjsJL/PW8HmjB521Db17//Wt7kZNyncS1u2szHYZ8Eu87Hf5EftekX2n0FEJJPjGeixP4/jPujYLyIyMEUZRtnWA3vCD29oYHsNYJ/Twb7jh938NG6+gN38ex8G+xvY9v77jcDuTj8R2dpgJ4M9CfZtsC2a3psKdgPYu9vEtCnYb8AWjz6x2kv9CXEzv53fJaz/JRc7gB3sf5fH+e/f6U/aM0dObPYssBvBngbbzi/b27eb5U6KALaWP/ENge2c/jOUhYZRlroq0jDKgz729/u4Dzr2l4mGURapiMIkMO9oOoH9feTgmes+zwNb33/9Wr/vS91B2v4F9oKmtmf693+WsK3Nwf6Em7Ngv4Q2e4LdkRLPFmD/AHt45AQ66v13+t+Ngf0z5v0p7gQ3/H3jQP0vsPeAnc/wDNWj1vuob/cTF7vNA3t2TLtDRn5HWdkkf/LbJvs6g6IERuqqUAnMgI/9/Tzug479w20isI+A/R43D88isAuSfyehKIERqYjCJDC/aTqJfXUA+9sM7EdN3+/s9/0k7tH6zjHv2eh1ht/fGOxO//7HU/a5E9ijCe+tj7vzZ2AHp7RZ4ducFfP+bLBPNH1/WFPcV4GtnbDd9/k2D+G6PkxIaLffyEkxjW2Cu4N3nD+BGtiO6euEoARG6qpQCcwAj/39PO6Djv3D768L9l2wt4P50gabAvY1v96v4hOjEJTAiFREYRKY+5oOuG8ZwP4+NvrkbUc17f/DLW23ALsD97h9q5htXejXu51RfY7XaPc5sLkJ7/3QbyOhewDgnmY849sdGvP+1e4EPfz9N33bZWBTU7Z7im+3CmzblHYf8+3S7iTu7k+YZ4K9rukEpgRGpDAKlcAM8Njfz+M+6Ng/aju7JLx3tF+3IAMqKIERqYgiJDA2Ade/tnEied4A9nklw90IAOz0kQN02oloje3s0xT34SntDgK7F2zzmPdmNG3jPSnb2K+p3WYx73+x5furfdtvtfkZLvPtzmvTrnGi/UV6u1HrnKgERqRoipLADPrY36/jPujYP/zeVmDfSFl3HK47mYG9KX0/g6AERqQiCpHAbNl0cF7ep21ulH4ybL3TZ7f6/X++w/1cNBK3rY/rB7y2u+tlrwP7L1xR5PkkjiRjv/LbWAm2Ycq+vuXb/SVDXOsx0uXgjSntxoE9nuEkHIE96Nt9sP3+h9dTAiNSOIVJYLYc7LG/X8d90LF/+P1Pgu3aJqYv+fXPbR9/3pTAFIUmspQqeKLp60f6tM3P44bQvCf+7ejnI1/b5kCjcHJe9l3YROD1/pvHgF/6r5/E/Ux3AdcD+0H0YMI21m/axuUQ
pf38e/rX32YIbiYwARgCrkpptxMw0X99RUq7GcAU//VvMuxfRKSdAR/7+3HcBx37R9kOmAN2HERJT3zu9K8vTtmP1IwSGKmCR4CngbWB9XrfnE0EdgE+3K6l94amOK7vYEfTcCcKgNMgOr6DdRv2AMb7r1NOTrYpsE37dsMaJ8Y/QvR4SrvX+tdFEN2b0u4Q/3p9n+ZoEBEJeezv9rgPOvY3WweY5LeXlMA85V8tZT9SM5rIUiogMmC+/2YS2KQeN3g88C2/3Sz29a+XQTTUwX6a+yLfkNgq3ZZNX9+W0q5xB26IbLNLv86/pt1Zg5GTWMo2bRzQ6GpyToZ9i4hkEPTY3+1xH3Tsb/Zx/++jKfua5l8LMGCQFIUSGKmKnzZ9/frEVm3ZbsBWEJ2dsf1auDthABd3uLNlTV8v6nDdhg2avk47uDdOYtdC1DQcp71rzab2HEYe1aecxGwtYNf27ZgFTAZWMervZM8HOyxlPRGRdgIc+3s67oOO/U3H/ugeiE6BKGZ+mmEzM+xLakYJjFTFuUBjiMaU8fTT2HbAF4AjOljpNbhH4AZ0MEEjAAubvl6dbRV7LqOH5LzLvz4G0f0J60wA9vbfLBi9reHH+80aJ+angd+nBLMrruvGauDKlHYH+NdLIHqoafl7gZUp64mItBPi2N/LcR907O/g2G+b436GO4CEoaSljpTASEVEQ8BhuL6yr3Ijm3TC/gP4BvCWlgNtO7P86x8hWtzZPqPFjBQnxgyR2cq2By4BVjQtzPJI/QPAc/zXzV0N3g5cFNN+pn+9BqIVMe83NO7s3dymgHQv//qTkUU2Hne39Ocx7UVEMgpy7O/huA869nd07P8SrvvbkV101RMRSVKEYZSb2d5gj/lhDk/wB8u09jvhJhM70xdwdrq/m/2+ju8qXBdvhiEZbV8/pGbcRJiX+228IOa9w8B+CvaIb7ND03vXEzsE8PDM0G0uBOxa2s5+bWP8Z8Rct4Hh5Z8Be3/69gENoyxSQEUZRrnZII/9vR73Qcf+LGwvv37G9oOgYZRFKqJoCQy4A/3wBFv/BPsC2FvAtnWPo20vsI/6C9DrwbrsN22TwVb7/byyh3iP97/Ht7cs3wDsALB5uJmY149dHXsJ2ANgP8cVTeLa2olg3/Ankq8walx/+yCxcxfYJozMq/CylJjXwc09YCTOoDzc9qe+nZ/R2d4B9r30dYbXVQIjUjhFTGBgMMf+fh33Qcf+1HWn+p/tmGztB0UJjEhFFDGBabBXgH0N7I+4ibRW+gPijWBfxxVt9rL9yWD3gJ0LFvW4rd3A5uJmer4Md2ftN2BHu5NZ2/U3gf9v745ZqgrjOI7/rlpgQ9ISTUW6BDUELUHvoLagtfcgTSU0BNHY0BA0NdcQjQVBUxFIW0FD0djUELQE9jRcjQjTQL3P+Xs/H7iI3Cv8jsiR770cTnuYtFdJe57xDc4u/fH8zPifVnuXtGcZv0O5yebfx/R462NqhzK+O/KL7Y+9HVn/Hb1N2suk3c74ItD/IGBgeIYaMBv28ty/m+f9xLl/05+bT9qbpN3Y/rWTJmBgnxhywFCfgIHhGXrAUFebSdqTpN3svWRzAmYoXMQPAMAQ3E3yPhnd6j2EYRMwAAB01paT/EhGK/94/vpE5zBoc70HAAAwzdrlJMeT0fIWLzoxqTUMn09ggCEb/fUVgH2lnU9yYet4aWeTfJnUIobPJzDAQLXZJKfXvzmTZLXjGAB2XVtK8jTJ56S93uQFs0kWkiwmuTrBYQycgAEGpC1lfNfmg0mOJTmc5HuS+0lbSfItyadkdKXfRgB2yb0kR9cf2/mwx1soRMAAAzL6mGSHN4cDoIbRxd4LqMk1MAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBkgZwwIAAADRSURBVAAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUMdd7APvCfNLO9R4BE7bQewB0tujcz5Q51XsAYwKGnVpLsphktfcQ6GSt9wCYsI2/+TvrD5g2P3sPmHYChp26lsQ7cEyrT8noa+8RMGGPkpxMcqD3EOigJXnQe8S0+wWITlA6GPOniQAAAABJRU5ErkJggg==) # $B = \left( \begin{array}{cc} .5 & .5 \\ .2 & .8 \\ .1 & .9 \end{array} \right)$ # # Can we figure out what's happening just from a sequence of observations? 
# # HMM's in Python # First, we will load some Python modules from hmmlearn import base, hmm # Module for HMMs from matplotlib import pyplot # A plotling module similar to MatLab's plot import numpy # A package for arrays, matrices and linear algebra from math import * # Math might help model = hmm.CategoricalHMM(n_components=3) # Create a HMM with 3 internal states model.n_features = 2 # Number of observed states model.startprob_ = numpy.array([0.350, 0.375, 0.275]) model.transmat_ = numpy.array([[0.6, 0.3, 0.1], [0.3, 0.5, 0.2], [0.1, 0.3, 0.6]]) model.emissionprob_ = numpy.array([[0.5, 0.5], [0.2, 0.8], [0.1, 0.9]]) print(model.startprob_) print(model.transmat_) print(model.emissionprob_) observations = "ININNINNNN" # Convert observations to a column vector of 0's and 1's obsSequence = numpy.array([["IN".find(c)] for c in observations]) def alpha(k): alphatilde = numpy.multiply( model.startprob_, numpy.transpose(model.emissionprob_[:, obsSequence[1]]) )[0] alpha = numpy.divide(alphatilde, sum(alphatilde)) for j in range(1, k + 1): alphatilde = numpy.multiply( numpy.dot(alpha, model.transmat_), numpy.transpose(model.emissionprob_[:, obsSequence[j]]), )[0] alpha = numpy.divide(alphatilde, sum(alphatilde)) return alpha filterResults = numpy.array([alpha(j) for j in range(len(observations))]) print(filterResults) f_hot = filterResults[:, 0] f_warm = filterResults[:, 1] f_cold = filterResults[:, 2] ind = [i for i, _ in enumerate(observations)] pyplot.bar(ind, f_hot, color="red", label="Hot", bottom=f_warm + f_cold) pyplot.bar(ind, f_warm, color="yellow", label="Warm", bottom=f_cold) pyplot.bar(ind, f_cold, color="blue", label="Cold") pyplot.legend(loc="upper left", bbox_to_anchor=(1.05, 1)) pyplot.xticks(ind, list(observations)) print("Filtered values only use the past to explain the current") filterMaxs = numpy.argmax(filterResults, axis=1) print(filterMaxs) print("".join(["HWC"[x] for x in filterMaxs])) # Find the probability of the internal states at each point in time smoothingResults = model.predict_proba(obsSequence) print(smoothingResults) s_hot = smoothingResults[:, 0] s_warm = smoothingResults[:, 1] s_cold = smoothingResults[:, 2] ind = [i for i, _ in enumerate(observations)] pyplot.bar(ind, s_hot, color="red", label="Hot", bottom=s_warm + s_cold) pyplot.bar(ind, s_warm, color="yellow", label="Warm", bottom=s_cold) pyplot.bar(ind, s_cold, color="blue", label="Cold") pyplot.legend(loc="upper left", bbox_to_anchor=(1.05, 1)) _ = pyplot.xticks(ind, list(observations)) smoothingMaxs = numpy.argmax(smoothingResults, axis=1) print(smoothingMaxs) print("".join(["HWC"[x] for x in smoothingMaxs])) # # Viterbi algorithm # Finds the most likely sequence of states that explains the observations. # **Idea** Consider partial paths (dynamic programming again!). These paths consider both past and future observations. 
logProb, viterbi = model.decode(obsSequence) print(exp(logProb)) print(viterbi) print("".join(["HWC"[x] for x in viterbi])) # # Comparison of results pyplot.subplot(1, 2, 1) pyplot.bar(ind, f_hot, color="red", label="Hot", bottom=f_warm + f_cold) pyplot.bar(ind, f_warm, color="yellow", label="Warm", bottom=f_cold) pyplot.bar(ind, f_cold, color="blue", label="Cold") pyplot.xticks(ind, list(observations)) pyplot.title("Filtering") pyplot.subplot(1, 2, 2) pyplot.bar(ind, s_hot, color="red", label="Hot", bottom=s_warm + s_cold) pyplot.bar(ind, s_warm, color="yellow", label="Warm", bottom=s_cold) pyplot.bar(ind, s_cold, color="blue", label="Cold") pyplot.legend(loc="upper left", bbox_to_anchor=(1.05, 1)) pyplot.xticks(ind, list(observations)) pyplot.title("Smoothig") print("Observations: ", observations) print("Filtering most likely:", "".join(["HWC"[x] for x in filterMaxs])) print("Smoothing most likely:", "".join(["HWC"[x] for x in smoothingMaxs])) print("Most likely sequence: ", "".join(["HWC"[x] for x in viterbi])) # # Learning the HMM using Baum-Welch # 1. Start with random transition and observation matrices. # 2. Fix your transition matrix and find the observation (emmision) matrix that best describes our observations. # 3. Fix you observation matrix and find the observation matrix that best descibes your observations. # Repeat steps 2 and 3 many times to improve you guess. learnedModel = hmm.CategoricalHMM(n_components=3) # Still has 3 internal states learnedModel.n_features = 2 # And 2 observed features learnedModel.n_iter = 10000 learnedModel.tol = 0.01 learnedModel.verbose = False class ThresholdMonitor(base.ConvergenceMonitor): @property def converged(self): return self.iter == self.n_iter or self.history[-1] >= self.tol learnedModel.monitor_ = ThresholdMonitor( learnedModel.n_iter, learnedModel.tol, learnedModel.verbose ) learnedModel.fit(obsSequence) # Create a longer sequence of observations from our original model longSequence = numpy.transpose(model.sample(1000)[0]) # Create a longer sequence of observations from our original model x = learnedModel.fit(longSequence) print("Original and learned transition probabilities") print(model.transmat_) print(learnedModel.transmat_) print("Original and learned observation probabilities") print(model.emissionprob_) print(learnedModel.emissionprob_)
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/730/129730677.ipynb
null
null
[{"Id": 129730677, "ScriptId": 37441718, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13404689, "CreationDate": "05/16/2023 04:54:55", "VersionNumber": 1.0, "Title": "NLP 5", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 158.0, "LinesInsertedFromPrevious": 158.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
# # Making Decision with Hidden Information: Hidden Markov Models # # Markov Models # ![markov.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAgMAAAGQCAYAAAAzwWMnAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3dd5xcZdn/8c9uet8UkhBIowQIhN5BCEWagDyKiIIIAhYsdOyKiA/IY8HnQbH8FBAQkCJdQJCO9ICoVOlICakESEJCfn98z2RnZ2dmZ3fPnPuU7/v1mtfsTr12d/Y697nLdffBzIpsJPAA0AaMADYBDgXWBe4PF5aZmZklZRzwCPBv4G1gFmoMmJmZWUGMAz4eOggzC6s1dABmZmYWlhsDZmZmBdc3dABmFtwo4NPAUnSCMBn4IbA8ZFBmZmaWjJHAGXTsJTwM+H2YcMzMzCwN+gLvomWGZlYAnjNgZpWWAS8DO4UOxMyS4caAWXG1ACeg5YWV3gAmJRuOmYXixoBZcY0DTgXWqXLfCODJZMMxMzOzEL5c5bbJaM7AWgnHYmaB9AkdgJklZijwGDAGuC26bSJaUfBS9H0f4GzgIuCahOMzs0BcZ8CsWFrpeBJwOfAR4CRgABo6+CNwRfKhmZmZmZmZmZmZmZmZmZmZmZmZmZmZmZmZmZmZmZmZmZmZmZmZmVlTtYQOwMyargXtQDgNbUq0BjAElSEegiqRLgDeAhah7YufiC7PoS2NzSzH3Bgwy6f1gF2AXYGpwLPAU2gnwn+jA/98dPBfBrQBw9D+BZOAdVHDYa3ocX8FbgYeApYn+HOYWQLcGDDLj9HAp4CD0Bn9zcAt6ODfG2NRw2IXYHPgWuB3qIFhZmZmKTAZOBe4F/giOstvlgHAx4EbgeuBLZr4XmZmZtaFkcDpwIPA3gHefz3gErTL4VoB3t/MzKzQdgVmAQejbYlDmol6Jb4QOA4zM7NC6AOcDNwErBo2lA4GAD8DrkA9FmZmZtYE/YGrgG8Qvjeglo8C96FJh2ZmZhajPsDFaIJg2m0HPACMCx2ImZlZXrSghsDRoQPphl2Be1DtAjMzM+ulE4EzQgfRAwcCF4UOwszMLOvWB+5EJYOz6DeoUWBmZmY9dDOwUeggeqENLYEcGjoQM6utT+gAzKymD6J9An4dOpBeWIxWQWwH3B04FjMzs8z5M7B26CBiMBB4hOwOdZjlXlrXKpsV3SSgH/B06EBisBj4C7BX6EDMrDo3BszS6cPAZaGDiNGlwH6hgzAzM8uSq4GpoYOIUSvwaOggzKw69wyYpdNE4LnQQcTofeBVYHzoQMysMzcGzNJnBDAvdBBN8HdUN8HMUsaNAbP0WQ14OXQQTfAq6dpp0cwibgyYpc9oYE7oIJpgDjAqdBBm1pkbA2bp0wosDx1EEyzHhc7MUsmNAbP0WU4+C/TktZFjlnluDJilz1w0VJA3Y8jn8IdZ5rkxYJY+r5PPJXjjgNmhgzCzztwYMEufvE60mwY8FToIM+vMjQGzdFoEDAsdRMwmAy+GDsLMOnNjwCydHgC2Dh1EjEYB81ElQjNLGTcGzNLpbuADoYOI0QfQz2RmZmYNGkm+Dp6/BHYIHYSZmVnW3IQ2LMq6VuAx8lk7wSwXXA3MLL2GAjOAe0MH0kvboxoDV4YOxMzMLGvagPtCBxGDc1CDwMzMzHrgD8AWoYPoheHAQ0BL6EDMzMyyajvgwtBB9MIJwJdDB2FmZpZ1t6OCPVnTD3gEzX0wsxTzBEKz9JsPHAz8OXQg3XQI8CZwQ+hAzMzMsq4FuBOYGjqQLvRFcxwmol6Bh8jnHgtmZmZB7AH8LnQQXdgFeAt4A/g18I2w4ZiZmeXPTcAmoYOo41JgRXRZBlwA9A8akZmZWc5MB24jncv0+gCv094YWAG8CzxO+oc3zMzMMuVMNDEvbXYB5tKxMbAC7VL4FtprwczMzGIwHJgFjA0dSIXyIYLyhsAbwIEB4zIzM8ulPYFLQgdRptoQwQJUSnm1gHGZmZnl2u+Bj4YOIrIrqoVQagjMBk4inXMbzMzMcqMNDRdMCh0IcAVqBLwD/BOYFjYcMzOz4tge+CthK4n2BZYAbwOnB47FzHqob+gAzKzH7gJuBNYH/l5x30i0FHEqMAX1IKwCjI4uA4FB0TXAe8Ci6HpOdJkLvAA8H12eBP5T8T7bo8bATqjioJllkMf0zLJvGLA12uFwC2AGKgm8hPaD+Ytokt+b6EC/OLq8G71GP7ShUD/aGwxj0AZJU6LL0Oj5jwIPA3cD90avt6yJP5+ZNZkbA2bZ04oO+ntGl83QeP190eVR1FPwDLA8xvedjBoaM4CtgG3QEsfH0SZKN6A9FBbH+J5mZmYWaQU+APwf6qpfiuYLnIRKFIcaq18H+DxwFSoutAA4H9gHlyI2MzOLxQTgm8Cz6Iz7SuAgVHwobfoDewHnAvPQEsOfAOsFjMnMzCyztgL+iCb0PQx8ARgRNKLuGQB8DG2utBy4FfUWtIYMyszMLAt2R+Pu7wF/QHMDsm4N4KfAQjS/4NN4CaKZmVknM1Ej4B104ExDMaG4tQFfQ8MHTwCfwD0FZmZmrI0m3y0BzkJzBPJuGPAtVMfgfmDbsOGYmZmFMQhV7FsC/AlYK2w4QYwGfo5WRlxA+nZiNDMza5qdgKdRRb9dAseSBhugIZI30XwCMzOz3BqEhgKWAqdF35u0oBUTC4BrgXFhwzEzM4vfxmhHvyeAzQPHkmYT0TLE14EPBY7FzMwsNoeiPQB+BQwJG0omtKJVB0uAH+BliGZmlmH9gF+g7X0PDhxLFm2PSi/fCIwKHIuZmVm3DUGb9zyHhgisZ1ZFuyP+Cw0hmJmZZcJIdAD7B7B64FjyYABwKeol2DBwLGZmZl2agLYP/htaR2/x6AP8GhUq2j5wLGZmZjWtB7wIXI2XDTZDC3AymoPhlQZmZpY6a6HlcL/Fs9+b7Ri00mD30IGYmZmVjAWeQrsMeuOdZJyIegi8r4GZmQU3HHgYuBlNdLPk/AiVMF43dCBmZlZcg4Db0c57QwPHUkQtwO+Al8jnls9mZpZyrcAVwOPAmMCxFFk/4HrgMbQ1spmZWWK+iyYMTg4diDEEeBTN2TAzM0vEbmjnwd1CB2IrrY12PDwydCBmZpZ/q6FJayeGDsQ6+STwDq5SaGZmTdSKVg3chCavWfr8Gi3zHB46EDMzy6djgdlo8xxLp0Fo/sA5oQMxM7P8WQMVudk/dCDWpQ1QhcKdQgdiZmb50YKGBq4JHYg17AzgSVwIyszMYnIQMB/tSGjZMBRtGvXV0IGYmVn2DQJeAI4LHYh12/5oaGdK4DjMzCzjvodmp/cPHYj1yA3A5aGDMDOz7BoLvAXsFzoQ67FpwHvAFqEDMTOzbPox8CCuKZB1FwB/Ch2EmZllz3hUzW6P0IFYr00HlgEzQgdiZmbZ8gO0NbHlw2XARaGDMDOz7BiM9h84IHQgFpuN0dyBaaEDMTOzbDgKeA7oGzoQi9V1wG9CB2FmZtkwCxery
aPdgUV4EyMzM+vCVqg72ZsR5U8r6vH5XOhAzMws3X6Fi9Tk2beB+0IHYWZm6dUPTRz8r9CBWNNMBd4H1godiJmZpdOewEK0H4Hl1wPA10MHYZYmraEDMEuRjwLXAu+GDsSa6o9oEyMzM7MOWoBXgANDB2JNty4aKvAkUTMz62BjVLJ2dOhALBH/Bj4dOgiztPAwgZnshsaS54QOxBJxE6o7YGa4MWBW8gHg1tBBWGJuA3YIHYSZmaVHCzAb2Dt0IJaY1YAVwOTQgZilgWuvF8O+aEz8faAN6AOcAswLGVSKTENzBf4WOhBLzCvA88C2wAthQykE5yCzwPYBtq647cfA3QFiSatPoAllViyXAz8MHUQBOAdlgOcM5N/hwBcrbrsenRGtk3w4qTQDeCx0EJa4x9Df3prLOSgD3BjIv3NQIZ1ypQp7njkvG9LcxsBRwIOoS3QB6qK+GZhS9ph90JnSXGB+9Lj7gB0rXmtb4Em0+95C4BFgu+aFnmuPob+9NZdzkFlKXQycHzqIFHkSOLjJ79EHHbxXAGvUedzfo8fsW+cxrcCzwMdji66YZqAx7MGhAykg56CUcc9AsYwCTkRnqJ8NHEtatACT0GSyZloO/Cf6emmdxz0ZXb9X5zFrABcBl8QQV5E9h/7+XlGQHOeglPJqguLYDZVh7Qfcievvl6wKDKT5jQFo7xIdDbxc5f5+tI9hj6rzOl8CvhZjXEW1CO1SORV4PHAsReAcZJYyPwfODh1ESmyOztr7JPBeV6MhgJ1r3P9l4IroMV+u8Zg9gP3iD62wHgaODB1EATkHpYyHCYrpdNRFt1voQFJgNJq0tzyB9yrvGag0DhgB3BN9X61noB+wF3Bl/KEV1pvAmNBBFJBzUMp4mKCYXgJeBT6CarTnUQv6fNcbewcdCJKa0VyvMfAVlCBLW+tWawwcBfyiwffaMnrNF4DhwAWo8d8XddGWDAFORePm90cxbAl8DHgHGA98F3gtenwb8C1gAvAM8B001PIF1KMxBHUFfwkNvWwGfBj9HaYCi4FjgSUN/hzNNofGNqdqQb1Hy5obTmEUIQeZpcbawOvAzCr3/RO4LtFokrUb8BZwDfAhoH+Nxx0F3JtQTF9HB8xvVty+BXBQ9PW+0WMuqHjMeHTgbcQBaFVC+Ra9Z6Lke07FY7+BGh7rRO/7qeixrcBW6Mz562WPPw01LtaOHv9R4KeoV6PkWuBGtFzypLLb+6Alkyc0+HMk4RfA/6txXwtayvk71Gj4flJB5UiRc1CmuGcg3wbR3gIv1wedpV2ReETJ6Y8OVnujTYiWALNQ8r+R9jPTgehsNQnVegZagANpP0DOja4rewaORWfwXdkbuBCd3Zf/3e8Ajo6uy42I3nPd6PsjgF3Qkrs1UM2DUsLuG8W7EM21KMW1Kx1/hy8Bn0G7QH6r7Pbl6MCwWZW4d45ec3VgTbRp1Jk0/0x8MTCs4rb10e/ho8AAYBX0c3sJYvcVOQeZBTEULUs7uey2FpSMV6147JdRF+7IJAILZG/a1/WXX+ajA9IN6Cz82yTXTfnRKIbzym77FLBp2ffrRY8p763YhsbqIAxAZZWrzSk4ms41DjaLYgL1kKyI3quWLVBvAcDx0eM3qvK4W9HveHjF7cNRI+O7FbfvjOZClAwFbgEurRNLXE6P3md91MPxIor9fTp/dn6cQDxZ5hyUYe4ZyJdWOs6KXwGcARyCNuN5G53dDEOJPcubhAxAiaR0GVX29TC0KcqAKs8rdWfvDnwQJau/NzvYSGXPwDB0Fnx+lceUegZaUc/BMQ28/hfQwb7yYAs64L6MihWVPIp6S0Dd4bOpP2TyQNnX26N1+o9WPKYVNW4uQ42xcpui33fle2wN/HfZ94vQz/Ik6nW4uU5MvTUMjVt/hK4nVH8ADdW8hf53Spe5ZV+/07RIs6FIOShX3BjIj0VofK7SUmqPiabREDSZbTIqBlS6nhjdV7KUjkm4/OsX0EHnQzXeY1H0/Aej5zUygSwOlY2BLwFnVTymlBxLjYFDUU/CigZef2b0uMrSr63ADqgefLnyLvhd0Rl9I+/TGr1XtS7ejVAPwK1V7tsLTSS8r+y24WiC4ZXAv8pufwpNWtyW5jYGFqAy0P8G9kQ5cRT6/FR6AS1FHB49ZiKdG6LlQwmLUaGpF6PL82VfzyV/8pKDCsmNAQthddQdvh46M56MZqf3QWcOz9OeNO8o+3pRN95jOR0PdvNRUirNG7gh+v4EOnZRN1N5Y2Aampz3ZsVj3kNn1KWDzFQ0ga0Rm6Lf3fyK2zdBqwD+WuN5M9DSxmoH8GpKr1ft8aW9FCrva0Hlk6+viG8Z+tuvS8fGAOhg2uxx+v6ox+Sw6Pta8wVAn8HKhlY9g9DPNim6bIIaPpNp7x5/lfaGwuPod/A8ySx1NVvJjQFrpsFoI5hNo8v6KMG+gpLeE2i89gV0BhV3AmwF3kBr93+JDoaVSw2XUH04oRlKZ4Oj0cHnW3UeNxx195/SjdcfTfUDfqnIUem+o4GfVbn/Lw2+z07R9e017nsK/Y3LbYMOiMdH36+JehGuQGfYlUZFj7+vyn1xGkDHZY7/RJMij0Pd2Iej+SfDaazXpNy7qMeh3vbYq6Kfc2r0foegoZ7lwNOo8fpwdKls5JnFxo0Bi0sLOtPfBu2iNwMdeP+OEtkvUaJNqgTp39A48F+pPyP9LTpPdGuWxWhMeRTwZ2o3fuagHQ2fpHs1EJ6g874HA1Bxl4VojL8VLVMstxNqkNU7aJWbGT32pYrb+6CegYurPGd39PPeEH1/NJq8WcsR6PNyVYMx9VRpNUWlFajuwv3R91ugz0rcXo0ulY2efqj3aFPUGPkO+pw+hT7b96AdK7uqo2Fm1lStqNvzBLT07BHgD2iW8GZkp6G5N+0bCCXhRfR7qudG9PvsbonkQ9BM+NJEuMFoBvx5tHfB74F+5pIWdDBsdEy3Fc1r+E2V+zZDB9GPVbnvR7RP1FyD+g2Baaig0dQGY+qN67qIJU1a0HDKYejv9QAajjkZzQmpVUvDzCxWq6I67peiWeTnoQPQaiGD6qVt0Nl0tQljzXADmjNRz1lohn1PHA38Gg1BfBeN7Y8G/ogOesdXPH4UGkrZocHXH4VWHVTbX2Ev1NgZWuW+1dCB67voLLfWzP1+aMJgtSWLzXAfWlaZVaNQD9jPgYdQj9PxqEFlZhabjdFB5B6UpI9FwwF5sSY6m01qRYHVdxb1ax3E7XnaS0DnweponsOf0PDc/6CGnvehMbNumw58DyWTi9A697agETVPPzSnYNOuHmhN91W0lr/cKk18v9LfvlpFxDwYiJZLno2GnX6Glmom1QtmZhk0Fs2gvg/N8P44xSm/+iLqarVwDqR9lUJJK7W3co5DkXqF+qD9On6LGganAmsFjchSJSuTvKw5WtGZw2fQDPPzUcJYEDKoAJ7FiTGkD6L5KI+juQItqELdhmhCZLOsiT7rSe1aGdJyVHb7JrTC5MPA/6FJh+eiipFJrfQxs5QYgcb+Z6Guww3DhhPcWXQsCWzJGUv1PSRKl2ZO
JDwBuKuJr58Fk9CEzkdRSegsTwY2swZNQWcDD6Pu18rd2orqc3SusW/5dx6qRmmaP/EJ4E40T8hzaMxyaG1U0vZOtFOfZxZ3tDWqQjcwdCCWqEfI9rLCZtkaFXu6hmRXdphZk0wFLkBV+D4YOJY0G4DGS7cLHYglZjhaSbBx6EBSbCNUn+JGYMvAsZhZD4xBe7PfA+wSOJasuBs4KXQQlpjd0FyF7lZ5LKINUC/BJXiirVkm9EPVx2ah8T+vJ27cGcDVoYOwxJyCZtZb43ZEQ41noknIZpZCO6JNVU7GY989sRvaIjmpHQwtrPtwT1BP7YP2RTgEn3CYpcYY4EJUKGhy4FiybADala5azX3LlzFo3f2M0IFk2Ai0LPkWNEHZzALaD82I/nDoQHLiWrTLn+XbwcDL+Kw2DpuhXpaj8Sols8SNQKsE/kAxSqkm5XBUmtgHiXy7Cvjf0EHkyABUsOhWVMvEzBKwBSoadEDoQHJoJKo30NPtgy39huNlpM2yFcpN3ufDrIlaUAnhu4A1AseSZ9eh2dKWT4cCL+Eu7WZpAy5HlU49GdcsZoPQkMDP0MYi1jwHAG/iFRl5dSfwg9BBFMCXgduBcaEDMcuLVYE70M6C1nz9gTfQlrqWL+ugVQQunJOM7dGwweahAzHLug1RAaHtQwdSMD9CZzWWLz9FS+EsOWui1Qb7hA7ELKu2Ah7Ea3hDmAK8h/4Glg8jUflhH5SS14b2RzkidCBmWbMPcC8aIrAwLkFbuVo+fB14Ak8cDGUQWtJ5QuhAzLLiv1AXdVvoQApuC9Q74J6Z7BsMvIrPTEPrC5wPfDN0IGZptwfaaXBk6EAM0DLD80MHYb12AvA8XomTBn3Q/9Q3Qgdilla7o2103SOQHpuh3oHpoQOxHhuKVoccFjoQW6kvGoY7JnQgZmmzOdoFzKWF0+cytJe7ZdMPgX+hA5ClR1+0F8gnQgdilhZT0fLBKYHjKKpNqL/b41RUvnaPZMKxGE1H5aXr7UQ5DNglmXCswmBUQ+WDoQMxC60NLR/cNHQgBdWKhma62vr5NHR26THn7GhBG+f8oYvH9UeFcVwpL4xx6Pe/XuhAzEJpRd3P+4cOpMAORwf6rgwFXgC+09xwLEYHAQtobHnuAcBZzQ3H6lgXDZOOCB2IWQgnA6eGDqLABgEP0XgC2hNYjCcTZsFw4BXg6G485zb8tw1pP+BKvH24FcwewPW4AEpIpwJHdfM5F6JiUJ6Mlm7nonk43fk7bYsmtFk4PwS+FjoIs6SsAjyCxyhD2gQVdurTzee1oeGCU2KPyOJyKLCInp3l/yJ6voXRF00odBlwy70WVJLzQ6EDKbABaOOUaT18/k7AUmCH2CKyuExHDYFP9/D5Q9CE3kmxRWTdtQaaUDgsdCBmzfQ54MzQQRTcD4Gv9PI1TkXlbSf0PhyLyWDgH8A5vXyd7YAb8dh1SIcCZ4cOwqxZVkXjmENCB1JguwB/ofdzNVrRnI+/4eWGaXEOagwMjuG1foqr44V2FfXrQ5hl1uW4uEZIk4FHie9sfjTwLKqz7rPIsI4H3gLWj+n1+gG3ADNjej3rvklotc/A0IGYxenDwHmhgyiwQcBdwJYxv+5awGt4jXpIn0RLPuOuEDkONR6nxvy61rjjcG0Py5EBaELM2NCBFFQLcAHwmSa9/uborPSkJr2+1bYragj0dMJgV7ZBjUifnYbRB7gfmBg6ELM4nIgPFCGdBvykye+xMzooNavBYZ1tgRphxzX5fQ5DxXBcWyKMXYDfhw7CrLdG4XGvkL6CxvSTKO70SbQpzkcSeK+i2xCYjVaGJOHLJPc5ss6uR40/s8z6H1Qj3ZJ3CJqRnOQZ3edQg+DIBN+zaHYA5gE/J9mJmz/Ac0NCmYGWe5pl0hg0V6C7Ve6s9z4B3IwmDiZtP7Tl8ekB3jvvPgy8Q7jf7S9Qo8CSdymwfeggzHridODg0EEU0OdRQyBkBbOZwHx0Jumu5XgciuZlfD5gDC1o/skv8N81aeuj4QKzTBmJ5gq4VyBZX0WTvdIwR2MTtOzwYuIphFNUrWgviHdJz3yMr6JNq/qFDqRgLif+5cFmTfVVwp7BFE0L8CPg/5GuBtiawD9RZTxvj9t9Y9FY8evAjoFjqXQsmpMSYiiqqLYB/hA6CLNG9UXFSlx2OBlD0RnDqaSzEuAg4GdorPvowLFkyQ7AK2h3ydUCx1LLp4C7gdVDB1Igd+Hft2XEASS35KnoJqJk/KnQgTTgELSj3u9xQ7GeFtRoWoIaUWnvit8a+DsuXZyUg1DD3yz1bgamhA6iAD6AVmtkaf3xdDRk8Dg+eFSzHnArmmuRpX08JgB3AEeEDqQA+qOe17Q3Eq3gJqNd8ax5+gDfRBvJjAscS08MBs4AlqIyyePDhpMKQ1ClyCXARWTzdzIQ7T/yWzR0Zc3zY2Cf0EGY1fNdvJywmSainpfTyf6ZwTTgJrQE8WiKW+52H+A54Glg98CxxOFjqMdqm9CB5NgGwBWhgzCrpQWYhZeRNctHUJL9QOhAYtSC5ju8hj47HyKdkyCbYSO0bvxt4Buo+zcvJgO3ASfjegTNcg+wSuggzKrZAm+o0QyjUffrH4ARgWNplja0NHIRahTsT34PIhuganLL0JDAlKDRNE8/VK3wZrTVtcXrWOCzoYMwq+Z0YN/QQeTMx9BkoUNCB5KQMWim9Hw0yfAQ8jN8MBO4FjUCLkONgiLYBC2HO5l89X6ENhH4c+ggzKp5EBcgicsENCb4e7TzY9GMAL6FduZ7Ec1FyeKe7m3AF4FHUCnh3wDrBo0ojL5oXsjfgK0Cx5In91DM/GAptgFwSeggcmAgqt74ICo6U3RDUFfo/eiM+kbgM6Q7AQ5C8zsuRoWWnga+RjZXfsRtbTRscBYa/rLe+RrZqDFiBXIC8OnQQWTcPmhI4GTSsbdA2myItsR+Hi3BuxGNm6bhTHs14HA0F2Ah8CbqBZhJcSZEdsc+aO+SrwIDAseSZRui5blmqXEjLpHZU5uhmgG/wmePjWhBVe/+G62ueB94CU3G+xKwOc0druqLiicdivaC+BewAngGnfHuQfaXfSahPxo6eBDNjbHua0EnEHmdbJsot9p7byCqPubdtLpnXTQWPgw4CR1UrPvGAzsB26L93mdEtz+Dqh0+F11eAN5A8xDmAG/VeL3BqAt7NGqcTUQz/tdAf7P10YHsKTRmexf6/D8d609VHOOB76Pf78nAnUGjyZ7zUNnqh0MHknV5bgyMRGcwU1Eym4TWpZYS3UB0BlXqkn4PLe16DyXLOcBclESfjy5PAv+peJ8paH34z5vzY+TOVDTWtw7amvavCb3vaNSVPQVYjg56g1HXe54SyUB0wJ6ByvtOjS6lz3/l//wC1LvQVuW+eajX4XnUoHgceAw1MhY2JfriWhfVW5gEfA+VZbau7YFy9i0VtyeV/3MjL42BYajrdDu03n8GOqNZQvsf80W0Feqb6A+9OLq8G71GP1RGtB/tH5gxqIDIlOgyNHr+o+gAcjeaIfxGM3+4nJgCfB0lve+jiVRJGQUcA/w
ELdsrORiVj/1QwvGEVPpsD4u+H4G6WUuNgndQEpyDJi1asqajxvJk4Dto10arz/m/wFrREp2TgftQ0lqI9gU4FY3BrUP8e9tPBvZGB7Ur0YdrBeri/jHaXMWT3zraHI1n/xXYJVAMXwCeRWfK5fqg7vLKswqz0GagCZk3AXuSnxO3ODj/F1wrKkX7f6irZik6wJyECnvE/Ydv1DrA54Gr0IFlAXA+mjFc1CIjLcCu6HdyNRrPDmlf1HrfpMp9b6Cxb7N6dka55n+B69AKoiQKQq2BxsQfQRMOi1rLxPnfmIB2qXsWdetcifa0Hh4yqBr6A3sB56Lx1tmoa7ryjDSvBqN18Q+if9o1wobTpdGoa/x/QwdiqbYz+r8uGYp6ky5NMIYxwLdRo+Ab0fdF4PxvbAX8EU3oeBh19WapNv0A1F11E5qwditqLeZxGcz66IA6C413prkoToxea8gAACAASURBVLkT0Rj55NCBWKp9o8pt01AX8a4JxzIQOBKNVV9IfotzOf8bu6PlNe+hzWm2CBtOLNYAforGtR5HBYpCdWvFpT/6sN+IPvAfI1t19CcA/0bV8sxqGY7GpadXue9VNMkvlM1QfY6HUQGjPFQ1dP43ZqIPwTvoFzcpaDTN0YbOnGcDTwCfIHstxU3R3+dRtCogS3+nwWjc9Vdo74NqCd6s3GC0vLJao/E5tEFZaCPR5/ohtO5+F7KXV2bi/F94a6PJF0tQBbMJYcNJxDC0Ec1cVHc+9AS7rqxG+0Yr16BegCxXmRuEunf/AuwXOBbLplGo+/e/QgdSYTM04XBWdF1t0myaOP+nP/833SDUql4C/Ili7vk9GhUrWopqbI8NG04Hbajk7F/QrN3PkK0xu0asjiYlfT50IJY5JwF/J71ndgNQQ+VPqBF/DGrUp4Xzf7rzf2J2QuVLnyTc2vM02QB1kb1JczY82oDGut1Go4P+dSiBfL3B52XZRWhJ0KqhA7HMmIbKPU8NHUiDRqEG782oiNGxNPZ/PRjYsQnxOP931Oz8n0qDUFfQUuA0irtmtpoWNGN2AXAt8WzaMxA4Ex3sapVLHgUcgrr/Z6FCHtNieO80qTex8SQ0K/zAhGKxbOuHDqobhQ6kh7rz//4xVKHvz8Rz1ur8X1sz8n9qbQz8E02c2DxwLGk2ES1DeR2Vye2pHdAEp7fRwe7Jsvs2RBNZbqf9TCGvy+sGoSIlf6px/5Ho91Nt+ZhZpbOAbUIHEZPynsD70AF6e9pnuv8B/W8sQ5PevkzPKyE6/zcmrvyfWoeiFuavgCFhQ8mEVnSwXgL8gO4tQ2lDM4pno3/k0mUecDb6h7wGFQcqQtf4GJTMzq1x/w/Q72fvpAKyzPoqqoJXbpUQgTRBaVJtqdrhH1EN//IcMj+6b51uvvahOP93R2/yf2r1A36Bzk4PDhxLFm2PzmpvpLFCPvsBL6M1uisqLotJroRqKCPROuXKn/E31F79cC/ayGRo88KyHDgQjXWXa0Vny3nTgqouLqJzHlmBTjTOoOtyu87/vdPd/J9aQ9BY03Ooi8h6ZlW0O9a/UBdSNROix5SGBGpd8r7N8g3o5zyy4vapqEFQ2RNyLBqj2775oVkGDEXDaSdX3P5B9FnZKLpsjHoIvggckGB8STqA2o2BFWjc/2VqD5k4/8ejkfyfaiPRD/APtHzLemcAqoP+HzTeX25dtN55GWoMvIPq7Vf7B36SfPsq2qq0WoJqA05BBU1+jjYT+S3pWm5lYQ1Fs9y/X3bbWFQ5rtZBMasTCbtSmi9Q7fIuyjVLo+8Pq3iu83+86uX/2DRjW8wJ6AztbTQOO6cJ71FEfdCY//5oF767ottbgPGoG6lyH+4J0X1jovuXocaDmVk9V6NtlOehJW+vofLLL6FhgjdQbp8b3bcsep7zf3PUyv+ptR46M7saLxtphhbUhfk2OZ1pamaZ5fzfXJnJ/2uh5RC/JSezH1PsGDTTdPfQgViuHA4cFzoIyyTn/+SkOv+PBZ5C40xpLc+ZNyeiFmLh61pbr6yK1s7/BC0b+2bYcCyDnP+Tl8r8PxxtoXkzmuhgyfkRGs/zPACLw6W4MWDd4/wfTqry/yBUwe5+vE47hBbgd2hST973ELDmc2PAusP5P6zU5P9WtCf842i2uoXRD7geeAxtjWnWU24MWKOc/9MhFfn/u2jCSF5r2mfJEOBRNGZn1lNuDFijnP/TI2j+3w0VnNgtxJtbVWujanqV1ffMGuXGgDXC+T99guT/1dCkhROTfFNryCdRBcKmVamyXHNjwLri/J9eieb/VjRr9CaaU73Qeu/XaJnP8NCBWOa4MWD1OP+nX2L5/1hUirIIW99m1SA0fnRO6EAsc9wYsHqc/9Mvkfy/BipysH8z38RisQGqUFW55apZPW4MWC3O/9nR1PzfgrqGrmnGi1tTnIF2KXQhEGuUGwNWjfN/9jQt/x8EzEc7Ulk2DEWbhnw1dCCWGkNRgji5xv3XA99JLBrLCuf/7GlK/h8EvIA3MMmi/VHX3pTAcVg6DAWeBr5fdttwdBZxIUr4z6PxRjcKDJz/syz2/P89NDuxf1wvaIm6Abg8dBBmlknO/9kWW/4fC7wF7BfHi1kQ04D3gC1CB2JmmeL8n32x5f8fAw/iNaVZdwHwp9BBmFmmOP/nQ6/z/3hUzWiPWMKxkKYDy4AZoQMxs0xw/s+PXuf/H6CtKS0fLgMuCh2EmWWC83++9Dj/D0b1pw+INRwLaWM0djQtdCBmlmrO//nT4/x/FPAc0DfuiCyo64DfhA7CzFLN+T+f6ub/WhNDZgEXAz9sRkQWzO5omckEYGHgWIpoADrrGgr0A9qi62HR/YNprxg2jPZk3Ib+V1uir4nuKz2P6HWGdvH+A9G68UYsRWuU63kXWFz2/TuoDCrAInQmAtpa9X1gBaplALCc9s9g6XkLo+csiL5/p8FYLV7O//lUN/9XawxsBdwFTAJebWpolrRW4N/A6cCvAseSdkPRwXZ4dN0WfV36vnTfSNoP7iNpPyivQP9fK6LXa0EHznfRcq1lwDx08FsUPaY7B9DKg3X5c2spvWajSo2QWlqBEWXflzc2utOwGRQ9d3j0uDa0rn0I1X+PpUZIeeNhIfq9ll/Pi74uv31BYz96YTn/51fd/F/tH/1XwBjgo82NywL5NrA3+qfPs1ZgFDpAj6pyKb+9DR2Myi2i40FkPjqQVB5cSgec96LHlB/crTlKjYdhtPewlBpp5dflt5duG1HxWqVG2dyy68pL+e1dNbiyzvk/32rm/8rGQD/UGjwSr0vPq6modTgNeCZwLN0xHC13WiW6jEdFUcZE35cO7KXP9Ps0ltxL35e6t61Y+tKxYdhV47Ff2XMXoM/P7LLL69HlzbLbQpmOyk832oBx/s+/mvm/sjGwJ3AJMA51xVk+PQBcAZwWMIYW9Dkbj/ZHLx3kS1+PQQf71ujxC1GiKk+6b5R9XzrAd6cb3Kw32lADYZWyyzj0uS2/rWQOHT+zr6JGw+vAK9H1shjjeyW6/hFwNh3nd1Tj/F8MVfN/ZWPg/6Gxvk8mFJ
SFcSJwILBZk15/NDqor4YO9quXXY+jfSz5NdoT4WyUGMsP+LPxwd3yYzTtDYWx6H+h1Ms1Ibrug+aGvA78p+zyKvo/eTW6bwX19Ykeuwrt8yvORQeA+TWe4/xfDFXzf0vF1y8Dx6OZpJZf6wL/Qgfr7k4SGosmF02Mriejg/wElEhWoDP0aknsFdQAcJe8WW19UENhNao3qsdHj3sP/T+9CLwUXV5Euwz2Be6LXqek1Ci4DO1KOafsPuf/4qia/8sbAxujOtTj6PghsXz6N3AKcF7ZbQNpP8hPRAf6SdFlNDrQz6Y9+bwQff0yOtC7a9EsOf1Rvi79j04s+3oasBZqWFRajuY7XAt8A/3vOv8XS6f8X94YOAn4L2CbhIOyMM5GM6s/CVyPksBS2g/0pTOM0vdvhgnTzHrgYOC31N96uDTUcCjqbXD+L47y/A90rDD1AeDWpCOyYG5Du5IB7Eu8E5fMLKz16dgQWI6WwJZqVTyC9rp/EHgMTShz/i+O22jP/x20oO7fvZOMxoJaDZ0ZTA4diJnF7gI0UfApNAfgCGAjqpcYdv4vnpr5fx00a3t00hFZUM8BnwgdhJnFbiiN7y3g/F9MHfJ/aQ33ptEdnjhSLA+jiUNmli+LaHzoz/m/mDrk/1JjYAYaN7JieQz97c2suJz/i6lD/i81BjYkOx+G7wN3oEpe89A6yXuAU8sesyVwE/Bk9Jh56Oe7Ho2VmDyG/vZmVlxZyf9bomqKdwO3AzcCfwU+U/aYvYBDevj6a0ev+zRaerlbg88bCvwFeBz1rhzbw/dPWtX8/yRaipIld6EJEHvWecyB0WOujfm9t0CNkayPsc1AY4WDQwdiZsGkPf/PQCd3jwGfomO+agU+B5yFJki+hfZk6KkhqCTzctp31WxEf+B76HiTleWZnfJ/aUvQ7UNF1AN90ZjYAtp7N6r5Efrj9LSlWMsv0e+s3hreLChttbte6EDMLIi05//PosqJ36H+hMjjaN+/pLfuR/X7u+vcKIZGJ26G1in/T4huWD1URD2wNYr5ui4edz/NWT73T7ROMw9mo641MyueNOf/U1FsRzTw2MGoV+CCXr7nCDTx8oc9eO5LxN8L3Wwr838r+jC8TzwtqqTMjK7vqPOYYbTPkn0hxvcehVpSt8f4miG9hOdRmBVVWvP/54FvAheiDZS68g4a67+ll++7Iyrh3N3XmYYaVH/t5fsnbWX+b0Xj3nPRGElWzIyu6zUGtkV/1Ntifu/tUNdaXhoDb6Ltgs2seNKY/9cCzkR7JnypG8+bTe8bAzujDaDu7ubzdomus9YYWJn/+0ZfZGl9aV90QH4XeKjO43aMrhs5aLcBX0ddTXPRhh+3oFYpqPFxEmoEbIS6kU6MLi8DR3bnB0iZOWR/IqSZ9Uwa8/+ZaJvz/6H2dsvV/AntpVJNVzm+ZGfgXuDtOu+zJfAV1OM8HA1N7IJ+j492I9406JD/j0I/fFaU5gt01QIrrTboar7AZugPuEfZbQPQzNWvVHn8Q2gpY178gsa64cwsf9KW/6egYYslxNdj2WiOXyV675PrvNYBwN/R1tIlZ6IGxqUxxJq0lfm/L9q2dnHQcLpnZnS9CvDnGo8ZiHoPXqD+fIFNUHfQIWjTjpIlwEWopsGvaf/9tKGKTWd0EePhaCLKT7p4XBosRvMrzKx40pb/D0A9sLcQz06p3cnxO0XvXetEc2/Uk7AlHedY3AEcXeN5+6Jjxvvo+NEHbR08r0c/TfxW5v++qIW0NGg43VPq/j+S2i3aXVCjod4uXH2B36GSjH+scv8i1AU0PXoMaGfHVqoPPayKJrwsRV1NWWklLkUJwcyKJ235f/3o+v4YXqu7OX5nNPxc7bgyAPgZWsE2q+K+idF1ZWNgH1SP5pSy236MVhxs1+gP0WQr838r0A9NmMiCvmg97CK09WYtO0XX9eYLHIVabOfVuH/d6Hpo2W0z0USbapNLXkWTXY5Ds1qzYgnZr5dgZj2TtvxfWtn0z24+r9pwcHdz/M5oeLla4+gLwBrAZVXu2xn4DyreVO5w4IsVt12PJrevUyOmpK3M/63o4NYnaDiN2wz94e6i/iYcpcbAbXUec2B0XesMfiM056D8D7wjakG+1VWgGdKPxjc0MbN8SVv+L+XW7gxdbAB8tMrt3cnxq6NyxLWGCGZSvZptK7AD1VcxnFPl8YOi67RM2lyZ/1vJ1pnhzOj6tjqPGYLKBb8IPF/nceuhNZZzq9y3FiopeRvwenTbCDT+VG85Yxb1R58BMyuetOX/R6Lr7lRFPRY4v8rt3cnxO0e312oMbIqOJ5WrGzZBcwGqPe8qVNq43MFo9UEc8yHisDL/lxoDA4KG07iZ0XW9uQAfQK2deo8BLR15osZ9R6Cus/KZptvQeb7AesD+XbxP2g3AjQGzokpb/r8I9VY0Wh55f3Qgnl3lvu7k+J1RKeHScvWjKh4/mupDF6VGROl4c0yN9xuFlqLPQyWW02Jl/m9F3TLDg4bTmNJ8gbeoX1+gkSEC0ASVahv0rI4+KKcA/yi7vbTVY/l8gWOAK7t4n7QbgfZ4MLPiSVv+fwrtKbMP8JEuHvspNHG7slZASXdy/KbouLIcGEfnSdVP0HkuwQB0YJ+NVq21AmOrvN9uqEegH3AnmqSYFh3y/95o8kPa7Y3GbG6u85iBqPW2ApWHrGc6agmWP251NJO0slUIcBAdx9eOpL2notKlaGVBFlwHfDt0EGYWRBrzfyvaAXAOKvZWOYyxJZqV/8EuXqc7Of5G4OLo62PRMEK5Q9BwQmljvMFRDOfR3mOwB/p91vNz4OwuHpOklfm/BXV/345aOSsCBlXLaWgZxjQ0H2AxKhbxZ1ShCjSx72toJurU6LanUDnLQ6hdxWoj4ITosZPQ7+CndF46AmoE/Aa1opaiLTVrlb68FI19/aCBny+0+9AH+hehAzGzxKU5/68DHI8OzH1QF/v7aFjgVzS2CqLRHL9edPsdqKrs76u81tFo6eOLUTw/i67PRsekd1ADoZ6JaO7BnugYElqH/L8m+hC4JG18stQz8DzZn/dgZj3j/J+8l4Ffhg4i8jxR/m9FrZzlxL/Nr6VfP9Rt9lzoQMwsCOf/5lgbDSvMrHLfAtoLFYXUIf+3oq6W/6Ca0FYsk1A31/OB4zCzMJz/m2MQWtZYuTV0HzSU/XCnZySvQ/4vTYZ4ls4TJqznhpCuQh61rIlaqWkpgGFmyXP+j99jaKXZworbj0IlitOwb02H/N83uvEftC+ds54ZDnwLTWLcFs1knYq6YE6p87yQNqTj8kkzKx7n//itQBvaHYImv7+NViAMQ0Xx0rBRUdX8/zmytw+z9Z5XEZiZ838xdcj/pWGCR9GmDd69rlg2wj0DZkXn/F9MVfP/AFQVKS3bKlrzDUcbVGxc5zGD0TrkY9GGG9W2ATWzbHP+L55O+b80Z2AJmt24HdW357X82RoVyXgs+n4w+mBsiZbDbIjOFFqAMeiz4q5Es/xx/i+eyvy/sjEA+hBsjyY9WP5tj8pyrg08gEp+LkMNgNYaz3k2mdDMr
AnGo/XtpcuU6PqzOP8XTSn/Ly/dUN4YuBkte/AudsWwO3A52sv7S8B3gFWo3RAAbeDxBbR+9vno2pscmYU3CBUOKh3oJ5V9PwrNbn8D/c++GF0eja4X4PxfNKX8v1JL2dcD0B7LH6b2ns6WD2NQdayNKesmQq3F01BvwbiK5ywFzgSeQQlmMko4I1CimYMKbLwMvFZx/Tqdd/wys671Qf+LE9AOfauVXY+PLq2oy/dFdLB/Ce2iV/p+bgPv4/xfHFXzf0vFg65FZ4rHJxeXBXAwcDo6qFfbnGQj4IdoPexI9DmZA+wL3FPjNcegxLR6jet+0Xu9TsfGwitlt81GSc0s7/qjnrix6OBe/r9SOtD3RRvzvIYa2v+puLyK/nfejykm5/9iqJr/KxsDhwPfRWd9advByuJzFTpz+EoXj5tOe6NgENrZqzfbnbags5zyM5wJ0W3jUXIs7T++FDUOZtPeUJiNkt/r6CxmNprnYBZaC/r8jomuS5/n0gF/fHRf6fO9BH1+36S9QVzem/YGje3KFyfn/2Komv8rGwMj0YdxRzS5wPJnOEo8u9L4zOGpwJFoJ8akkkTpzKkysY5DybX0fd8opiWoO7T8Mq/KbaWLWS3D0Th7I5fBZc+bTcdG6xt0btC+m8hP0DPO//lXM/9XNgYArgOeBo5pflwWwKHA91HrP67uxTQYQP2kPbLi+5JlaAJV6fJWdFkYXc8ru610e+mxefr95UUbKvk6PLoeFt1W/v3w6FJ67Aj0+SlZSHtjcg61G5Rzyd+wlvN/vh1KjfxfrTFwACpRuDqwuNmRWeLuBO5AZ/mmnoURZZdhdDxojKy4rfzgUr4Z1Qq0TOctdPa3GB1U3kMNhyXowLEoum1edL0IDYm8Hb1O6blEt5UmXi6kbBlQhoyMrvug3xlo/sjQ6OuBaAhqUPT18Oj+0gF6cPTYftFrlZ47OLp/BR3z2HzaG3KlS/lt5dfzo69Lfx9z/s+7mvm/WmOgPxqz+gpwcXPjsoStA/wrun4mcCx5VDrglQ5ww9DBqw39Xw2JLv2j2/pFj+kbXUP7ARDaD5DQfkAEtehby64rLUP/2/V2zpxP9SGfETVes2RpFH+l8ljK4ystPS01fKC9YUR0vaTsekH02IXoYPQuOmC/F8VcajiVN5QsPs7/+dWj/P8j4PZmRWTB/BS4JXQQlgqD0Jl2+WVY3WdYUTj/51OP8v8U1BLfKu5oLJiR6Gxrn9CBmFmqTcH5P296lf8vAS6KNRwL6evAE9TvAjYzA+f/vOlV/t8CtQ7XjjMiC2IwKlByROhAzCwTnP/zI5b8fx1wfizhWEgnoL0Eqk38MjOrxvk/H2LJ/5uh1uH0GAKyMIai4ieHhQ7EzDLF+T/7Ys3/lwHXxPFCFsQP0XKSvl090MysgvN/tsWa/6eitb57xPFilqjpaO32zqEDMbNMcv7Prqbk/9NQ68JjztnRAtwK/CF0IGaWac7/2dO0/D8U7XL0nbhf2JrmIFTNbdXQgZhZpjn/Z09T8/+eqDyoJ5Ok33C0LerRoQOxTDkcOC50EJZKzv/ZkUj+vxBtbenJaOl2LjAL/52sa6sCZwE/AR7BG1hZbc7/2XAuCeT/NtRddEoz38R65VC0IYxb8NZdl+LGgNXm/J9+h5Jg/t8J7Ra2QxJvZt0yHX0QPh06EMskNwasK87/6RUk/5+KyhtOSPJNra7BwD+Ac0IHYpnlxoA1wvk/fYLl/1bgeuBveLlJWpyDPgyDQwdimeXGgDXC+T99gub/0cCzqHZ1S4gAbKXjgbeA9UMHYpnmxoA1yvk/PVKR/9cCXkOzkS2MT6IlP64QZr3lxoB1h/N/eKnK/5ujVslJoQMpoF3RB8ETBi0ObgxYdzn/h5PK/L8zCuozoQMpkC3QP6GLxFhc3BiwnnD+T16q8/8n0aYIHwkdSAFsCMxGO1KZxcWNAesp5//kZCL/fw59II4MHUiO7QDMA36OJ+5YvNwYsN5w/m++TOX//dCWl6eHDiSHPgy8g3+31jNDgSeBk2vcfz3ejMZ6x/m/eTKZ/2cC89Es09awoeTGoWhc7vOB47DsGgo8DXy/7LbhwBmo7vx84Hm0ZtmNAuupmTj/x+1QMpz/N0HLTi7GhXB6oxXVAn8Xj8eZWTY4/8cjN/l/TeCfqDKSN87pvrHAjcDrwI6BYzEz6w7n/97JXf4fBPwMjXU0dX/lnNkB7Ul9O7Ba4FjMzHrC+b9ncp3/D0E7Kv0eGBI4ljRrQf80S9A/Ub+w4ZiZ9Zrzf2MKk/+noy6jx9EkE+toPeBWNNb2wcCxmJnFyfm/vsLl/8Fo9vJS4AJgfNhwUmEIcBpqDV6Efydmlk/O/50VPv9PA25CS1COBvqGDSeYfYDn0JKv3QPHYmaWBOd/cf6PtACfQt0is4APkYGqSjHZCBV5eRv4Bt4X3MyKxfnf+b+TNuBHaILJLGB/8lusYgNU9nUZ6hKaEjQaM7OwnP+tkzHAqajr6HE0AzUv3UczgWvRh+Ay9KEwMzNx/rdORgDfQjszvQh8F5gYNKKeaQO+CDyCSkn+Blg3aERmZunm/G+dDAE+C9yPWlQ3oj2zR4UMqguDUNnIi1GhjaeBrwHjQgZlZpYxzv9W1YbA/6CNVJagD8axpKOltRpwOBoLWgi8iVqBMynOhBgzs2Zx/m+yzARapgXYCtgX2APYGJVtvAu4G7gX1cN+t0nv3xcti9kS2B7YFhWL+DdwAxoXugV4r0nvb2ZWVM7/TZLFxkCl8cBO6I+yPTAjuv0ZVO3quejyAvAGGoeaA7xV4/UGA6Ojyzg0VjUFWAO1QtdHy0CeAu5BH8I7UHeQmZklx/k/JnloDGyL/tCvRN8PRH+wGajFNjW6TAJWofPPvAB4H032qLxvHvAS6pp6Ds1wfQx9yBZGj9kU7f1+IJokYmZmyXD+j0nWGwOrAH9B3UWvNficUqtvWPT9CLSmtfSheAeYi1qPyxp4vTGoxfkasBeaOWpmZs3l/G+AGjJXo6pVoT0CrEBdUGcAfcKGY2aWa87/ttIXUdWqNPgMmuG6Ao1F/RNYO2hEZmb55fxvAEwGHkTjQ2nQBryOPgylyxzgJLI/FGNmlibO/wbol/tnNHM0Te6l44dhBSqreS8wIWBcZmZ54fxvK30a+FnoIKo4DO1AVfmBWI4mp4wMF5qZWS44/xugGaCz0AzQtKnWVbQCVaP6YsC4zMzywPnfVvoJ2skqrcq7it5H61O9O5WZWe85/xugClB3ku4JGYehdarzgWeBj4UNx8wsF5z/baXzgV1CB9GFNvRBOAxYFbgPFbQwM7Oec/43QOUl/xI6iAb1Lfv6DODgUIGYmeWA87+tdB6wY+ggemAVtBd3mru2zMzSzPnfAO0VfXfoIHrhbGDP0EGYmWWQ87+t9N/Ax0MH0QvTUJEMMzPrHud/AzT+8ndgQOhAeukWVELTzMwa4/yfoLTvrrQXWrN5fehAeul9YGfg9tCBmJllhPO/rfR7YPPQQcRgCPBA6CDMzDLE+d+A9i6ivMzEvAqNH5mZWX3O/wlL8zDB
tqiIw7WhA4nJMGA6KkRhZma1Of8nLM3VkXYgX2Mst5O+bTfNzNLI+d9WugaYGDqImD0SOgAzswxw/k9YmnsGVgdeCh1EzGYDo0MHYWaWcs7/CUtrY2Aw8HboIJrgCWC90EGYmaWY838AaW0MTABeDh1EE7yMfjYzM6vO+T+AtDYGRgNzQgfRBLOBMaGDMDNLMef/ANLaGBgIvBs6iCZYQvZLa5qZNZPzfwBpbQxAfopNlGtFpSnNzKw25/+EpbUxsBAYHjqIJhgGvBU6CDOzFHP+DyCtjYHXgFVDB9EEY4E3QgdhZpZizv8BuDGQrPWAp0IHYWaWYs7/AaS1MbACrTMdETqQmE0DngkdhJlZijn/B5DWxgDAHcBOoYOI0UTgFVI8gcTMLCWc/xOW5sbAtcB+oYOI0X7A9aGDMDPLAOd/W6kFeJD8zCr9GymuS21mliLO/wnrEzqALgwAtgHuDh1IL+2CylBeFDoQM7OMcP63lQYCs4C20IH0Qgsa/1ordCBmZhni/J+gtPcMLAMWAIcD1wWOpaeOBBYDl4QOxMwsQ5z/rZMrgX1CB9ED6wAPAENCB2JmllHO/7bSKOAhYOPQgXTDWPRB2DB0IGZmGeb8bx1MQr/cGaEDaUAbcBewc+hAzMxywPnfOlgLeBjYJHQgdYwH7gE+GDoQM7Mccf63DtZEv+wvhA6kil3R7NcdQwdiZpZDzv/WwQDgLDRDc1zgWAAG+9OssAAAAkZJREFUA/+NKkyNDRyLmVmeOf9bJ/ugKlXfAYYGeP8+aNnLLOBLpLu8s5lZnjj/Wwf9UJfRI8C3gdUTeM8R0Xs+APyA/JTMNDPLEud/62QQcAhwK3AN8Ani7bIZCuwFnIs+BMcBq8T4+mZm1jPO/73UEjqAJlkH+CiqCT0U1bZ+BHgyuszv4vmDoteYBqyPJoT0R2Ulr0YTWMzMLH2c/3sgr42BcoOAbdH61HWiywhgRXT/YmA5HatELUEfmqeAx4E7gbkJxWtmZvFw/reGDUCzQc3MrFic/83MzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzKyIilB0qFFrA4cCb6N6188C54cMyMzMEuH8bwBsDVwGjCy77QJgSpBozMwsKc7/BqgC1XPAmmW3jQFmA2sFicjMzJLg/G8rfQW4MHQQZmaWOOf/SGvoAFLgCOCh0EGYmVninP8jRZ9A2IomjByMuosGoF2sJgI/AZaFC83MzJrI+d9Wmoy2sjyZjpNH9gEuDxGQmZklwvm/TNGHCYZH1wuBeWW3XwtsD8xMOiAzM0uE83+ZojcGXomuH6u4fQXwNLBzsuGYmVlCnP/LFL0xMBeNGb1b5b6lwKRkwzEzs4Q4/5cpemMA4Blg9Sq3D4zuMzOzfHL+j7gxAL8Etq24rR8wHbg5+XDMzCwhzv8FNRR4Es0eLRmAxoxWK7vteOA3yYVlZmZN5vxfR9/QAQTQCvQp+34JmijydeBNNMN0NvC55EMzM7Mmcv43MzMzMzMzM+vk/wNcpdr4vBIxawAAAABJRU5ErkJggg==) # Transition matrix: # $M = \left( \begin{array}{ccc} .6 & .3 & .1 \\ .3 & .5 & .2 \\ .1 & .3 & .6 \end{array}\right)$ # Initial probabilities # If we have no information about the recent weather, then 35% of the time it # is hot, 37.5% it is warm and 27.5% it is cold. 
# $\pi = \left( \begin{array}{c} .350 \\ .375 \\ .275 \end{array} \right)$ # # Add Observations # ![hmm.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAzAAAAPJCAYAAADNlc3KAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nOzdeZzbVbn48c/MdC90gdJCWcq+LwUEKTsICCJwBRURRVFBUC+CXkQFFC+I4IZ4UUAUELmI7IIKyKYCotyyyFL2nSJLCy3Q0j2/P57kl3TI7EnO95t83q9XXjPtJJlnJpnkPOec5zkdSJKUPysAU4EJwFvAPKAArA5sD3wTWKl4HUmSJElKai0iYal2WQgcmS40SVI9DUodgCRJ/fRn4BVgPWA54HlixeUc4JmEcUmSJEnSUtYCTk4dhCSp8dpTByBJkiRJvWUCI0mSJCk3TGAkSZIk5YZF/JKkvBsPvIco5n8R+BMwJ2lEkqS66UgdgCRJ/bAckbSsDawL3AncAawKXAw8AjyXLDpJkiRJqrAm8BiwcpWvvRdYAOzd0IgkSZIkqQsdwLhuvv4n4ElgSGPCkSQ1ilvIJEl5VADmdvP1VYCPEtvI7mtIRJKkhrALmSSpGT1f/Lh10igkSTVnAiNJakalAv4Nk0YhSao5ExhJUt4cBrwM7NXNdRYUP46qfziSpEYygZEk5c3ewARgSjfXGVP8+Hw315Ek5ZAHWUqS8mYacEXx0pUNih9vqX84kiRJktS1XYGjerjOX4AXgGXrHo0kSZKkljcIuBp4BNi+ytcvoLzK0tneRJvlfesTmiRJkiQtbWciCSkA51T5+gTgMuDATv//IWA68Nl6BidJkiRJlZYFbgQeBbbq4jqDgBOBu4hal98Tyc56jQhQkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJLawtdQCSJPXCcGAkMKp4GQnMB+YUL28Bs4ElqQKUJDWGCYwkKSvGAxsB6xUv6wPrAqsD7Z2u+w4wBOjo9P9zgMeBx4qXR4v/ngbMq1PckqQGMoGRJKWyAvABYFdgO2AtYiWllIA8Wvz4LPAmkZy8DcwCCsX7GEasxowGlgHGEsnPusAGxc9XBxYBU4E7gBuLHxfW9aeTJEmSlHsrAkcDdwGLgaeBc4BPEglMPQwFtgWOBa4jEqFZwCXA3sDgOn1fSZIkSTnUBuwJ/IFYCXkQ+AaxXSyFYcBewK+IROZV4HRgUqJ4JEmSJGXAIOBQYhvY28DPgc2SRvRuw4ADgb8RydUVZC9GSZIkSXXUBnwceJJY3TgOGJM0ot6ZDPyWSGSuBNZJG44kSZKketuUWM2YBXyNKLDPmw2Bq4iuZacCI9KGI0mSJKnWBgEnEGezXEAU6+fd7sAjRFe0KYljkSRJklQjqwD/AF4E3p84llobBvwQWAB8G48fkCRJknJte+Bloj3x2MSx1NMuwCvA74FRiWORJEmS1A8fAt4BTgHaE8fSCKsB9wD3AeMTxyJJkiSpDw4h6l0OSx1Igy0D3Aw8SmydkyRJkpRx/0EkLx9LHUgiw4hDOR8GlksciyRJkqRubAvMBT6fOpDEhgN3FC/DEsciSZIkqYrxwHTibBRF04LHgLNTByJJkiRpae3ATcCtQEfiWLJkM2JFqlW300mSJEmZdDgwg+Y4oLLWjiR+NyukDkSSJEkSjANeAz6TOpCMaidqYS5MHIckSZIk4CzgTjyFvjsbAwuBLVIHIkmSJLWyVYF5wM6J48iDi4FrUgchSZIktbIzgdtSB5ET6wOLiNUYSZIkSQ02HHidOLhSvXM9kfRJkiRJarBPAP8GBqcOJEf2B2bi4ZaSJElSw10FnJE6iJwZDMwC9kwdiCRJktRKSgPx3VMHkkNX4DYySZIkqaF2BOYAQ1MHkkOfAx5JHYQkNav21AFIkjLpPcB9wPzUgeTQP4F1gWVTByJJzcgERpJUzWbAv1IHkVOPAAuATVMHIknNyARGklTNBsC01EHk1CLgUWDD1IFIUjMygZEkVTMReDF1EDn2ErBS6iAkqRmZwEiSOmsDxhNnwKh/XgYmpA5CkpqRCYwkqbNliDbKb6QOJMfeAMakDkKSmpEJjCSps9J7w5KkUeTbYnyPlaS68MVVktRZW/GjCUz/LcH3WEmqC19cJUmdLSp+HJI0inwbRKzCSJJqzARGktTZ28A8YPnUgeTYOOC11EFIUjMygZEkVTOTGISrf8YRv0NJUo2ZwEiSqnkVWDF1EDm2IvE7lCTVmAmMJKmax4H1UgeRY+sSv0NJUo2ZwEiSqnkE2CB1EDm1MjAKeDR1IJLUjExgJEnVTAM2Sh1ETm0IzAZeSh2IJDUjExhJUjX/B6wGrJI6kBzalvj9SZLqwARGklTNs8ALwHaJ48ijHYDbUwchSc3KBEaS1JU7iMG4em8wsA3xu5MkSZLUQIcBT6QOImd2BuYAwxPHIUmSJLWcCcBiLObvi58AV6cOQpIkSWpVdwLHpw4iR54GPp06CEmSJKlVfQ2YmjqInNgCWASskDoQSZIkqVWtQgzKN00dSA78FPhj6iAkSZKkVncj8OPUQWTcEOA14IDUgUiSJEmt7kBgBjA0dSAZ5u9IkiRJyoihxOrCQakDybBbgR+mDkKSJElS+A5wP9CWOpAM2pKoE1ozdSCSJEmSwgrAXGDX1IFk0G+BS1IHIUmSJGlpZwN/Sh1ExkwCFgJbpw5EkiRJ0tLWIQbrW6UOJEPOBm5JHYQkSZKk6s4H/pw6iET2ByZW/HsNYD6wY5pwJEmSJPVkEjCP1quFWY4o1H8D2KP4fxcD1yWLSJIkSVKv/BS4K3UQDXYo8A6whEhkfklsp5ucMihJkiRJPVsReJM4vLFV3EYkL4XiZTFxcOXE7m4kSZIkKRu+BrwAjEwdSAOMJVZdCp0uC4DZwF7pQpMkSZLUG0OAR4GTUwfSAJ8lto91TmAKlLeU/Q8wOFWAkiRJknq2O1HQv07qQOrsVpbePlbtMo+oCxqdKEZJkiRJvXANcAPQljqQOil1H+sueSklMP8GVkkTpiRJkqTemAi8TnTpakbdbR8rXRYCVxHJjiRJkqSMOwyYRXOuPnTuPlZ5mQ+8DRycLDpJkiRJfdYG3EjzHerYVfexUvJyK7BysugkSZIk9dsaxGrEJ1IHUkPVto8tKF6+TPPW/UiSJEkt4XDgLZqnK1nn7WPzgfuBdVMGJUmSJKl2LgXuJv9nolRuH1tUvJxO/n8uSZIkSRXGAM8B/506kAE6knLy8hSwZdpwJEkl7t+VJNXaTsBNwKbAo728zVBgElFLsyKwfPGyQvHjqOL1RgJDOt32LSLRWAzM7HSZATwPPAu81Ief4dliPOcAXyFqYSRJGWACI0mqh8lEvUhnywGbAZsULxsQSctK1P89aR6xOvQ08GDF5RGiML/SKUTydXGdY5Ik9ZEJjCSpXgYDmwPbAtsB21D9rJhXiBWP
Z4ofp1NePZlR/Pyt4nVnE4X1lYYDw4B2yis3lSs4pZWdNYDVgRGdbr8QeBi4E/h78eNzffxZJUkNYgIjSaqljYG9gD2JhKUyWZhFrMo8CDxQvDwMzGlwjCsTcZZWgjYFNgQGVVxnOnALcD2xHW5mg2OUJEmSVAeDgQ8AvwBeYOkzUx4HLgQOAzYiVkiyaiSwC3AC8Cci2Sr9HIuAu4pfWz9VgJIkSZL6pwPYHjiT2P5VGujPIVYrvkxs28qzDqLz2HHAHUSDgNLP+TBwEiYzkiRJUqatA5wGvEx5MP8acC6wG+/uDtZMJhArSTdRPh+mQKzMfIZYwZEkSZKU2GDgIOBWyqfSvwn8CtiDpetGWsV44pyYv1H+ncwGfk7U1UiSJElqsFHEVrDnKa82TAUOB5ZJGFfWrEusSlVupbsD2Acb5UiSJEl1tyrwE2KVpUAc4PgLXFnoyRDgo0Qb5lIi8xBwKLGKJUmSJKmGViBWEt6hXNtyGjAxZVA5tSVwEXG+TIE42+ZwWnO7nSRJklRT44AfEB3ECsT5J18iDofUwKwNXEA5kXkU+BhuLZMkSZL6bDCxKvAa5RWX4zBxqYc1iE5tpe5lU4k21JIkSZJ64QPEakABeAs4HtsAN8JGwB+I3/sS4DfAykkjkiRJkjJsFeBaYgC9GLgQa1xSeD8wjXgc3gaOIQ7OlCRJkgS0E+eWzCYGzf8AtkoakQYB/wm8QTwmdwObJo1IkiRJyoB1iAMXS9vFjiISGmXDROAq4vFZAJxCtGSWJEmSWs4hRNJSAK4HJqUNR93YB3iBeKweBDZJG44kSZLUOOOAqymvunwmbTjqpbHApcTjNgc4Im04kiRJUv3tSJzlUqp1WTttOOqHQ4A3icfwGmB02nAkSZKk+jiaODRxMXAynvyeZ2sSCWgBeBzYOG04kiRJUu0MI058LwAzgT3ThqMaGQScRjyuc4FPpQ1HkiRJGrhVgPuJQe69xKnvai6HEAlMAfge0JY2HEmSJKl/JgMvEgPb3wDD04ajOpoMPEs81r8jVt0kSZKk3NidOJhyCXASzsq3ghWJAy8LwF3A+LThSJIkSb3zKaJYfz6xvUitYyTweyKJeQxYLW04kiRJUve+QKy6zAZ2SRyL0ugAziKSmGexVbYkSZIy6lgieXkD2CZxLErvOCKJeRnYJHEskiRJ0lJOJAarrwCbJo5F2fEN4nnxGrBZ4lgkSZIkAP6LGKROB9ZPHIuy5yhiZe4VfH5IkiQpsc8Qg9NXgQ0Tx6Ls+jKR5L4IrJk4FkmSJLWoQ4DFwOu4PUg9+zaRxDwFrJw4FkmSJLWYXYg2ybOBrRPHovw4nUhiHgRGJ45FkiRJLWIjotPYAuLASqm32oBfE0nM9cCgtOFIkiSp2a0EPEcMQI9IHIvyaTBwM/Ec+mXiWCRJktTEhgNTiYHnyYljUb6NBaYRz6WvJI5FkiRJTepCYsB5KbEVSBqINYjzYRYSNVWSJElSzXyBSF4eAEYmjkXNY1dgETADWD1tKJIkSWoWU4iOY28AayeORc3n60Ry/A9gaOJYJEmSlHOjgWeJwyo/lDYUNak24HIiiflR4lgkSZKUc5cQA8vvpw5ETW008AyRKO+VOBZJkiTl1CcpHzo4LHEsan7bEfUw04FxiWORJElSzqwOzAbmAhumDUUt5GQiab4qdSCSJEnKjzbgz8RA8ouJY1FrGQT8k3juHZg4FkmSJOXEp4kB5F/wvBc13vrAPOKMGLeSSZIkqVvjgFeJAeQGiWNR6/oOkUSfnzoQSZIkZdtviYHjN1IHopY2FHiY6Er2vsSxSJIkKaN2JJKX+4laBCml7YgE5iF8PkqSJKmTdmAqkcDslDgWqeRi4jn5pdSBSJIkKVsOIwaKV6QORKqwMvA28DoW9EuSJKloWeBlonB/zcSxSJ19i0iuf5o6EEmSJGXDicQA8fTUgUhVjABeAOYDkxLHIkmSpMTGENtzZgPLJY5F6soRRJJ9XupAJEmSlNYpxMDwpMRxSN0ZDDwFLALWTRyLJEmSEhkHvAnMBEYnjkXqyWeIZPui1IFIkiQpjZOIAeHxieOQemMQ8ASwEGthJEmSWs4wovPY21j7ovw4kki6f5w6EEmSJDXW54mB4BmpA5H6oJR4v0k0oJAkSVILaAOmEQXRnvuivDmJSL6PSxyHJEmSGuT9xADwstSBSP2wAjAXeA7oSByLJEmSGuAKIoHZKXUgUj9dRDyH90odiCRJkuprArAAeJLYSibl0U5EAnNV6kAkSZJUX18nBn7Hpg5EGqBpREvliakDkSRJUv08DswHxqcORBqgY4lk/GupA5EkSVJ9vIcY8P0+dSBSDaxEdNK7N3UgkiRJqo/vEwnMQakDkWrkNuI5vX7qQCRJklRbbcAzwDvAqMSxSLVyBJHAnJA6EEmSJNXWNsRA74rUgUg1NI4o5H8gdSCSJEmqrVOJBOZjqQORauxm4rm9ZupAJEmSVDv3EQXPy6UORKqxrxAJzBdSByJJkqTaWBFYAtyROhCpDjYgEpjrUgciSZKk2jiUGOAdnzoQqU6eAt4GhqYORJLaUwcgSU1gj+LHG5JGIdXPn4GRwA6pA5EkExhJGrgdgFlEHYzUjG4tfjSBkSRJyrnVie1jf0wch1RPE4nn+c2pA5EkV2AkaWC2K378e9IopPp6CXiWOO9oUNpQJLU6X4SkbBkMfAr4INGSd2Tx8kfgLGBOutDUhVICc2fSKKT6uxM4GNgMuCdxLMoe378kqQWNAa4E9gU6Kv5/dWAa8GLxc2XLVMpv1lIz+wKeB6PqfP+SpBZ1PrBqF1/bhhg4PNC4cNQLHcBc4g1aanbbEq9D56QORJnj+5cktaARwDvEtozBVb4+qPj1ArBOA+NS9zYkHpNLUwciNcCyxIGtbpdUJd+/1HAW8UvZMJY4IG4yMK7K1xcBbxQ/n9CooNSjTYofH0wahdQYbxGF/JsAbWlDUYb4/iVJLWxHuj5jYRCwmJj9XLZhEaknpxCzivukDqSojdhrPgeYAbwGzARmE1vdFlI+z6PSVsAC4M1Ot3uTmDldBHy1i++5CjAfmFe8/kziTJx3gCdr8DMpW64hnvNrpA5EmeL7lyTpXXYhBg3XpA5ES7mceFzWTh1IhVHAFpRjKwAvAAcQcQ6pcps2YHliS9z0itvdAGxPnAHS3Yz7GsAXiUFKAbga2BNYZsA/jbLmVOIx3jN1IMoN378kqUXdRMxqd1UkqTRKHciq7ftO7VOUE5Ez+3C7Kytut0cfv+cLwI/7eBvly+HEc+OI1IEoN3z/Us1ZAyNlWzuxTWl1YAoxQFR2rE6sWCxMHEc1Myo+H92H271S8fmYPtxuW+J3cVwfbqP8eab4cfWUQSgXfP9S3XiQpZQ9awJHApOIA+MuJIpm5yWMSe+2LLHt6qHUgXRhZsXn1QpruzK/4vPle3mbNuBk4BCymcypdkoJjDUwqsb3L0lqcW3A+sAxxFalvdKGo042IbbSXJg4jq6sQ3kr2F29vM1EouC/dLvje3m7Q4jaCDW/IUSt0z9TB6JM8/1
rokIbXNBLAH/c+5Cuw5Te/NAPtlB/vb2m/DwI7L0H6+b9vHCTJDnHOgeucdJTBFoVHIRMLodijLL/rXJcCvOtxnY2K2p4ELcP261wI2ArbBTcp2AHAvsBtE93a4fRGRMogbPrlFtBLsf4FjcddKRwBfalq/k+P2F/w2VgNnZ2i/TSOIDvbRTohzDui8IzlRAiMSRhcnE5sKvMZ/cwlEq7PvziYCjcLLx4DG3cMngSeAu4Drgf0gejD7dkVESmcP4F0Z2n0f+CQukXgf2IkQDfn1P5FtVzYRN9wwwBUQ3dOm/cZA46luh129Ug34nAM670ielMCIDJytB7zSf3NXB3ecZjJyR+6qDnc6DZjgvz4NouM7XF9EpAJsHeA5EN3Vvm30D7BLgb2BzYF9/PcvBm7NuMPXAGv7ry/P2L7hsoz7aCPIOQd03pEc9XmIPhHJYFfc5F/Q2aP85zV9fWeH+9ys6es+9qsWESmVXYFrOmjfPGLY+4FXAX+EyDKu3zTXCbdnaP8W/7oCuCDjPtoJcc4BnXckR3oCIzJ4zY/y2/TDHmVV09edzqC8rOnrRR2uKyJSFZ3Wr/wauB+YCuwDPNjh+s3aJDA2GWgUyn+rj92qQpxzQOcdyZGewIgMXvPJZH4H6y30r88Aj2RbxWaAvbJpXXCFpFnWfS7YVtnDExEpvA4TmGgION1/MwZ4d2fr0zxK57/atP0EsB7wd+DLHeyjnRDnnOb1QecdESkWDaPcGZvYNJxmhzMO20Zgj/l1p2ZoPwvsZrBJ/vs7/LqvzbDu9mC3FH9Iy5A0jLLUVVmHUbaNwbp4EmBTwZ7xP/ODfiStrOuu33TcfmFKu53BVuBmun9p5zEmbjfgOQeqd97RMMpFoScwIoM1k5Gum1d3tmr0MCPDeL4zuZ2NAXsPMAd4HUSP+Tdm+9cD0vdj++LuOB6gIS1FpEI+BiztfLXofkaGEJ7fQf0LED0OfMZ/k5Dw2ba4epdHgT0gyjpAQBYzCXfOAZ13RKSY9ASmPVvPPxbfxU1MNjyp14/BXgY2BSxjPZqNBzvN/973b3lvY9xsyL8F+zDYhJj1j/frvr1l+QZgB4DNw82GvH5XP2qt6AmM1FWZnsDYJmC7ujvmtto/Sfm4v9s/vv36w9t5vf+Zj2rfdo11x4B93z9d2bdp+TpgHwRbBnYB2GbJ2+hofwU650C1zjt6AiNSEUpg0tlBYENgT4ItBXu06d9Sv/wZsLkdbvc//UnjL2CX+38X+5PIem3W3c3tz64Eu8yv+xuwo90JRbJRAiN1VZYExl7qk5aVLcfgx3Hdqr7YfhvD24rAbgSb1kM8bwT7uT92XwF2Hdh3wWZ0v8019lHAcw5U57yjBEakIpTASF0pgZG6KksCI9JvSmCKQjUwIiIiIiJSGkpgRERERESkNJTAiIiIiIhIaSiBERERERGR0lACIyIiIiIipaEERkRERERESkMJjIiIiIiIlIYSGBERERERKQ0lMCIiIiIiUhpKYEREREREpDSUwIiIiIiISGkogRERERERkdJQAiMiIiIiIqWhBEZEREREREpDCYyIiIiIiJSGEhgRERERESkNJTAiIiIiIlIaSmBERERERKQ0lMBIDdnmYB8DuxDsJrCHwJaB3Q52OthOMevsB3bi4GMVEZH+0LFfpCqUwEiN2D5g1wD/BI4B/gacCOwNbA0cDPwJOBfs62Dj/HqvAn7i3xMRkVLRsV9EREaxp8D+HDqKdPYysMvBzN9peyPY2JT2E8DOBjsN7PlgD4KtBpsysJClBOwS/5maHDoSkcGyY/xn/8DQkaTTsV/6zV7hP0+nho5ERHpS9ATG3gu2AmyVP+mOz7jeGLCFYEv8wUp34KSFEhipqzIkMDr2Sx6UwBSFupBJRdk4sO8B3wdWAm+E6GsQrcq2frQaOB7Y2C+4PIcgRUSkr3TsF6kDJTBSVWcA78OdwPaG6NIutnE+8IT/WicxEZHi07FfpAaUwEgF2QnAYf6bD0J0TXfbiQxX7LkKuLovoYmISE507Bepi3GhAxDpL9sfmOO/uRiiM3rc4D+AZRAt63E7IiKSGx37RepET2CkQuxZwDf9NyuBj/Rho2NRFwIRkQLTsV+kbpTASJUcD2zmv/4FRH/vwzbvBC5IftveC3Yd2G3+DmDze/uALQC7Fux6sCOz7dI2AzsO7Hdu7gK7EuwKsDdnD9veAHaR3+/lYOeBbZlhvQjsJL/PW8HmjB521Db17//Wt7kZNyncS1u2szHYZ8Eu87Hf5EftekX2n0FEJJPjGeixP4/jPujYLyIyMEUZRtnWA3vCD29oYHsNYJ/Twb7jh938NG6+gN38ex8G+xvY9v77jcDuTj8R2dpgJ4M9CfZtsC2a3psKdgPYu9vEtCnYb8AWjz6x2kv9CXEzv53fJaz/JRc7gB3sf5fH+e/f6U/aM0dObPYssBvBngbbzi/b27eb5U6KALaWP/ENge2c/jOUhYZRlroq0jDKgz729/u4Dzr2l4mGURapiMIkMO9oOoH9feTgmes+zwNb33/9Wr/vS91B2v4F9oKmtmf693+WsK3Nwf6Em7Ngv4Q2e4LdkRLPFmD/AHt45AQ66v13+t+Ngf0z5v0p7gQ3/H3jQP0vsPeAnc/wDNWj1vuob/cTF7vNA3t2TLtDRn5HWdkkf/LbJvs6g6IERuqqUAnMgI/9/Tzug479w20isI+A/R43D88isAuSfyehKIERqYjCJDC/aTqJfXUA+9sM7EdN3+/s9/0k7tH6zjHv2eh1ht/fGOxO//7HU/a5E9ijCe+tj7vzZ2AHp7RZ4ducFfP+bLBPNH1/WFPcV4GtnbDd9/k2D+G6PkxIaLffyEkxjW2Cu4N3nD+BGtiO6euEoARG6qpQCcwAj/39PO6Djv3D768L9l2wt4P50gabAvY1v96v4hOjEJTAiFREYRKY+5oOuG8ZwP4+NvrkbUc17f/DLW23ALsD97h9q5htXejXu51RfY7XaPc5sLkJ7/3QbyOhewDgnmY849sdGvP+1e4EPfz9N33bZWBTU7Z7im+3CmzblHYf8+3S7iTu7k+YZ4K9rukEpgRGpDAKlcAM8Njfz+M+6Ng/aju7JLx3tF+3IAMqKIERqYgiJDA2Ade/tnEied4A9nklw90IAOz0kQN02oloje3s0xT34SntDgK7F2zzmPdmNG3jPSnb2K+p3WYx73+x5furfdtvtfkZLvPtzmvTrnGi/UV6u1HrnKgERqRoipLADPrY36/jPujYP/zeVmDfSFl3HK47mYG9KX0/g6AERqQiCpHAbNl0cF7ep21ulH4ybL3TZ7f6/X++w/1cNBK3rY/rB7y2u+tlrwP7L1xR5PkkjiRjv/LbWAm2Ycq+vuXb/SVDXOsx0uXgjSntxoE9nuEkHIE96Nt9sP3+h9dTAiNSOIVJYLYc7LG/X8d90LF/+P1Pgu3aJqYv+fXPbR9/3pTAFIUmspQqeKLp60f6tM3P44bQvCf+7ejnI1/b5kCjcHJe9l3YROD1/pvHgF/6r5/E/Ux3AdcD+0H0YMI21m/axuUQ
pf38e/rX32YIbiYwARgCrkpptxMw0X99RUq7GcAU//VvMuxfRKSdAR/7+3HcBx37R9kOmAN2HERJT3zu9K8vTtmP1IwSGKmCR4CngbWB9XrfnE0EdgE+3K6l94amOK7vYEfTcCcKgNMgOr6DdRv2AMb7r1NOTrYpsE37dsMaJ8Y/QvR4SrvX+tdFEN2b0u4Q/3p9n+ZoEBEJeezv9rgPOvY3WweY5LeXlMA85V8tZT9SM5rIUiogMmC+/2YS2KQeN3g88C2/3Sz29a+XQTTUwX6a+yLfkNgq3ZZNX9+W0q5xB26IbLNLv86/pt1Zg5GTWMo2bRzQ6GpyToZ9i4hkEPTY3+1xH3Tsb/Zx/++jKfua5l8LMGCQFIUSGKmKnzZ9/frEVm3ZbsBWEJ2dsf1auDthABd3uLNlTV8v6nDdhg2avk47uDdOYtdC1DQcp71rzab2HEYe1aecxGwtYNf27ZgFTAZWMervZM8HOyxlPRGRdgIc+3s67oOO/U3H/ugeiE6BKGZ+mmEzM+xLakYJjFTFuUBjiMaU8fTT2HbAF4AjOljpNbhH4AZ0MEEjAAubvl6dbRV7LqOH5LzLvz4G0f0J60wA9vbfLBi9reHH+80aJ+angd+nBLMrruvGauDKlHYH+NdLIHqoafl7gZUp64mItBPi2N/LcR907O/g2G+b436GO4CEoaSljpTASEVEQ8BhuL6yr3Ijm3TC/gP4BvCWlgNtO7P86x8hWtzZPqPFjBQnxgyR2cq2By4BVjQtzPJI/QPAc/zXzV0N3g5cFNN+pn+9BqIVMe83NO7s3dymgHQv//qTkUU2Hne39Ocx7UVEMgpy7O/huA869nd07P8SrvvbkV101RMRSVKEYZSb2d5gj/lhDk/wB8u09jvhJhM70xdwdrq/m/2+ju8qXBdvhiEZbV8/pGbcRJiX+228IOa9w8B+CvaIb7ND03vXEzsE8PDM0G0uBOxa2s5+bWP8Z8Rct4Hh5Z8Be3/69gENoyxSQEUZRrnZII/9vR73Qcf+LGwvv37G9oOgYZRFKqJoCQy4A/3wBFv/BPsC2FvAtnWPo20vsI/6C9DrwbrsN22TwVb7/byyh3iP97/Ht7cs3wDsALB5uJmY149dHXsJ2ANgP8cVTeLa2olg3/Ankq8walx/+yCxcxfYJozMq/CylJjXwc09YCTOoDzc9qe+nZ/R2d4B9r30dYbXVQIjUjhFTGBgMMf+fh33Qcf+1HWn+p/tmGztB0UJjEhFFDGBabBXgH0N7I+4ibRW+gPijWBfxxVt9rL9yWD3gJ0LFvW4rd3A5uJmer4Md2ftN2BHu5NZ2/U3gf9v745ZqgrjOI7/rlpgQ9ISTUW6BDUELUHvoLagtfcgTSU0BNHY0BA0NdcQjQVBUxFIW0FD0djUELQE9jRcjQjTQL3P+Xs/H7iI3Cv8jsiR770cTnuYtFdJe57xDc4u/fH8zPifVnuXtGcZv0O5yebfx/R462NqhzK+O/KL7Y+9HVn/Hb1N2suk3c74ItD/IGBgeIYaMBv28ty/m+f9xLl/05+bT9qbpN3Y/rWTJmBgnxhywFCfgIHhGXrAUFebSdqTpN3svWRzAmYoXMQPAMAQ3E3yPhnd6j2EYRMwAAB01paT/EhGK/94/vpE5zBoc70HAAAwzdrlJMeT0fIWLzoxqTUMn09ggCEb/fUVgH2lnU9yYet4aWeTfJnUIobPJzDAQLXZJKfXvzmTZLXjGAB2XVtK8jTJ56S93uQFs0kWkiwmuTrBYQycgAEGpC1lfNfmg0mOJTmc5HuS+0lbSfItyadkdKXfRgB2yb0kR9cf2/mwx1soRMAAAzL6mGSHN4cDoIbRxd4LqMk1MAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBkgZwwIAAADRSURBVAAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUIWAAAIAyBAwAAFCGgAEAAMoQMAAAQBkCBgAAKEPAAAAAZQgYAACgDAEDAACUMdd7APvCfNLO9R4BE7bQewB0tujcz5Q51XsAYwKGnVpLsphktfcQ6GSt9wCYsI2/+TvrD5g2P3sPmHYChp26lsQ7cEyrT8noa+8RMGGPkpxMcqD3EOigJXnQe8S0+wWITlA6GPOniQAAAABJRU5ErkJggg==) # $B = \left( \begin{array}{cc} .5 & .5 \\ .2 & .8 \\ .1 & .9 \end{array} \right)$ # # Can we figure out what's happening just from a sequence of observations? 
# # HMM's in Python # First, we will load some Python modules from hmmlearn import base, hmm # Module for HMMs from matplotlib import pyplot # A plotling module similar to MatLab's plot import numpy # A package for arrays, matrices and linear algebra from math import * # Math might help model = hmm.CategoricalHMM(n_components=3) # Create a HMM with 3 internal states model.n_features = 2 # Number of observed states model.startprob_ = numpy.array([0.350, 0.375, 0.275]) model.transmat_ = numpy.array([[0.6, 0.3, 0.1], [0.3, 0.5, 0.2], [0.1, 0.3, 0.6]]) model.emissionprob_ = numpy.array([[0.5, 0.5], [0.2, 0.8], [0.1, 0.9]]) print(model.startprob_) print(model.transmat_) print(model.emissionprob_) observations = "ININNINNNN" # Convert observations to a column vector of 0's and 1's obsSequence = numpy.array([["IN".find(c)] for c in observations]) def alpha(k): alphatilde = numpy.multiply( model.startprob_, numpy.transpose(model.emissionprob_[:, obsSequence[1]]) )[0] alpha = numpy.divide(alphatilde, sum(alphatilde)) for j in range(1, k + 1): alphatilde = numpy.multiply( numpy.dot(alpha, model.transmat_), numpy.transpose(model.emissionprob_[:, obsSequence[j]]), )[0] alpha = numpy.divide(alphatilde, sum(alphatilde)) return alpha filterResults = numpy.array([alpha(j) for j in range(len(observations))]) print(filterResults) f_hot = filterResults[:, 0] f_warm = filterResults[:, 1] f_cold = filterResults[:, 2] ind = [i for i, _ in enumerate(observations)] pyplot.bar(ind, f_hot, color="red", label="Hot", bottom=f_warm + f_cold) pyplot.bar(ind, f_warm, color="yellow", label="Warm", bottom=f_cold) pyplot.bar(ind, f_cold, color="blue", label="Cold") pyplot.legend(loc="upper left", bbox_to_anchor=(1.05, 1)) pyplot.xticks(ind, list(observations)) print("Filtered values only use the past to explain the current") filterMaxs = numpy.argmax(filterResults, axis=1) print(filterMaxs) print("".join(["HWC"[x] for x in filterMaxs])) # Find the probability of the internal states at each point in time smoothingResults = model.predict_proba(obsSequence) print(smoothingResults) s_hot = smoothingResults[:, 0] s_warm = smoothingResults[:, 1] s_cold = smoothingResults[:, 2] ind = [i for i, _ in enumerate(observations)] pyplot.bar(ind, s_hot, color="red", label="Hot", bottom=s_warm + s_cold) pyplot.bar(ind, s_warm, color="yellow", label="Warm", bottom=s_cold) pyplot.bar(ind, s_cold, color="blue", label="Cold") pyplot.legend(loc="upper left", bbox_to_anchor=(1.05, 1)) _ = pyplot.xticks(ind, list(observations)) smoothingMaxs = numpy.argmax(smoothingResults, axis=1) print(smoothingMaxs) print("".join(["HWC"[x] for x in smoothingMaxs])) # # Viterbi algorithm # Finds the most likely sequence of states that explains the observations. # **Idea** Consider partial paths (dynamic programming again!). These paths consider both past and future observations. 
logProb, viterbi = model.decode(obsSequence) print(exp(logProb)) print(viterbi) print("".join(["HWC"[x] for x in viterbi])) # # Comparison of results pyplot.subplot(1, 2, 1) pyplot.bar(ind, f_hot, color="red", label="Hot", bottom=f_warm + f_cold) pyplot.bar(ind, f_warm, color="yellow", label="Warm", bottom=f_cold) pyplot.bar(ind, f_cold, color="blue", label="Cold") pyplot.xticks(ind, list(observations)) pyplot.title("Filtering") pyplot.subplot(1, 2, 2) pyplot.bar(ind, s_hot, color="red", label="Hot", bottom=s_warm + s_cold) pyplot.bar(ind, s_warm, color="yellow", label="Warm", bottom=s_cold) pyplot.bar(ind, s_cold, color="blue", label="Cold") pyplot.legend(loc="upper left", bbox_to_anchor=(1.05, 1)) pyplot.xticks(ind, list(observations)) pyplot.title("Smoothig") print("Observations: ", observations) print("Filtering most likely:", "".join(["HWC"[x] for x in filterMaxs])) print("Smoothing most likely:", "".join(["HWC"[x] for x in smoothingMaxs])) print("Most likely sequence: ", "".join(["HWC"[x] for x in viterbi])) # # Learning the HMM using Baum-Welch # 1. Start with random transition and observation matrices. # 2. Fix your transition matrix and find the observation (emmision) matrix that best describes our observations. # 3. Fix you observation matrix and find the observation matrix that best descibes your observations. # Repeat steps 2 and 3 many times to improve you guess. learnedModel = hmm.CategoricalHMM(n_components=3) # Still has 3 internal states learnedModel.n_features = 2 # And 2 observed features learnedModel.n_iter = 10000 learnedModel.tol = 0.01 learnedModel.verbose = False class ThresholdMonitor(base.ConvergenceMonitor): @property def converged(self): return self.iter == self.n_iter or self.history[-1] >= self.tol learnedModel.monitor_ = ThresholdMonitor( learnedModel.n_iter, learnedModel.tol, learnedModel.verbose ) learnedModel.fit(obsSequence) # Create a longer sequence of observations from our original model longSequence = numpy.transpose(model.sample(1000)[0]) # Create a longer sequence of observations from our original model x = learnedModel.fit(longSequence) print("Original and learned transition probabilities") print(model.transmat_) print(learnedModel.transmat_) print("Original and learned observation probabilities") print(model.emissionprob_) print(learnedModel.emissionprob_)
false
0
71,435
0
71,435
71,435
129730622
<jupyter_start><jupyter_text>Twitter Sentiment Analysis ### Context The objective of this task is to detect hate speech in tweets. For the sake of simplicity, we say a tweet contains hate speech if it has a racist or sexist sentiment associated with it. So, the task is to classify racist or sexist tweets from other tweets. Formally, given a training sample of tweets and labels, where label '1' denotes the tweet is racist/sexist and label '0' denotes the tweet is not racist/sexist, your objective is to predict the labels on the test dataset. ### Content Full tweet texts are provided with their labels for training data. Mentioned users' username is replaced with @user. Kaggle dataset identifier: twitter-sentiment-analysis-hatred-speech <jupyter_script>import numpy as np import pandas as pd import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # ### **Read dataset** train = pd.read_csv("/kaggle/input/twitter-sentiment-analysis-hatred-speech/train.csv") test = pd.read_csv("/kaggle/input/twitter-sentiment-analysis-hatred-speech/test.csv") # ### **Preview dataset** train.head() test.head() # ## **1.1 Count number of words** def num_of_words(df): df["word_count"] = df["tweet"].apply(lambda x: len(str(x).split(" "))) print(df[["tweet", "word_count"]].head()) num_of_words(train) num_of_words(test) # ## **1.2 Count number of characters** def num_of_chars(df): df["char_count"] = df["tweet"].str.len() ## this also includes spaces print(df[["tweet", "char_count"]].head()) num_of_chars(train) num_of_chars(test) # ## **1.3 Average word length** def avg_word(sentence): words = sentence.split() return sum(len(word) for word in words) / len(words) def avg_word_length(df): df["avg_word"] = df["tweet"].apply(lambda x: avg_word(x)) print(df[["tweet", "avg_word"]].head()) avg_word_length(train) avg_word_length(test) # ## **1.4 Number of stopwords** import nltk from nltk.corpus import stopwords print(set(stopwords.words("english"))) from nltk.corpus import stopwords stop = stopwords.words("english") def stop_words(df): df["stopwords"] = df["tweet"].apply( lambda x: len([x for x in x.split() if x in stop]) ) print(df[["tweet", "stopwords"]].head()) stop_words(train) stop_words(test) # ## **1.5 Number of special characters** def hash_tags(df): df["hashtags"] = df["tweet"].apply( lambda x: len([x for x in x.split() if x.startswith("#")]) ) print(df[["tweet", "hashtags"]].head()) hash_tags(train) hash_tags(test) # ## **1.6 Number of numerics** def num_numerics(df): df["numerics"] = df["tweet"].apply( lambda x: len([x for x in x.split() if x.isdigit()]) ) print(df[["tweet", "numerics"]].head()) num_numerics(train) num_numerics(test) # ## **1.7 Number of Uppercase words** def num_uppercase(df): df["upper_case"] = df["tweet"].apply( lambda x: len([x for x in x.split() if x.isupper()]) ) print(df[["tweet", "upper_case"]].head()) num_uppercase(train) num_uppercase(test) # # **2. 
Basic Text Processing** # ## **2.1 CountVectorization** from sklearn.feature_extraction.text import CountVectorizer corpus = [ "This is the first document.", "This document is the second document.", "And this is the third one.", "Is this the first document?", ] vectorizer = CountVectorizer() X = vectorizer.fit_transform(corpus) print(vectorizer.get_feature_names()) print(X.toarray()) vectorizer2 = CountVectorizer(analyzer="word", ngram_range=(2, 2)) X2 = vectorizer2.fit_transform(corpus) print(vectorizer2.get_feature_names()) print(X2.toarray()) # ## **2.2 HashingVectorizer** from sklearn.feature_extraction.text import HashingVectorizer corpus = [ "This is the first document.", "This document is the second document.", "And this is the third one.", "Is this the first document?", ] vectorizer = HashingVectorizer(n_features=2**4) X = vectorizer.fit_transform(corpus) print(X.shape) # ## **2.3 Lower Casing** def lower_case(df): df["tweet"] = df["tweet"].apply(lambda x: " ".join(x.lower() for x in x.split())) print(df["tweet"].head()) lower_case(train) lower_case(test) # ## **2.4 Punctuation Removal** def punctuation_removal(df): df["tweet"] = df["tweet"].str.replace("[^\w\s]", "") print(df["tweet"].head()) punctuation_removal(train) punctuation_removal(test) # ## **2.5 Stop Words Removal** from nltk.corpus import stopwords stop = stopwords.words("english") def stop_words_removal(df): df["tweet"] = df["tweet"].apply( lambda x: " ".join(x for x in x.split() if x not in stop) ) print(df["tweet"].head()) stop_words_removal(train) stop_words_removal(test) # ## **2.6 Frequent Words Removal** freq = pd.Series(" ".join(train["tweet"]).split()).value_counts()[:10] freq freq = list(freq.index) def frequent_words_removal(df): df["tweet"] = df["tweet"].apply( lambda x: " ".join(x for x in x.split() if x not in freq) ) print(df["tweet"].head()) frequent_words_removal(train) frequent_words_removal(test) # ## **2.7 Rare Words Removal** freq = pd.Series(" ".join(train["tweet"]).split()).value_counts()[-10:] freq freq = list(freq.index) def rare_words_removal(df): df["tweet"] = df["tweet"].apply( lambda x: " ".join(x for x in x.split() if x not in freq) ) print(df["tweet"].head()) rare_words_removal(train) rare_words_removal(test) # ## **2.8 Spelling Correction** from textblob import TextBlob def spell_correction(df): return df["tweet"][:5].apply(lambda x: str(TextBlob(x).correct())) spell_correction(train) spell_correction(test) # ## **2.9 Tokenization** def tokens(df): return TextBlob(df["tweet"][1]).words tokens(train) tokens(test) # ## **2.10 Stemming** from nltk.stem import PorterStemmer st = PorterStemmer() def stemming(df): return df["tweet"][:5].apply( lambda x: " ".join([st.stem(word) for word in x.split()]) ) stemming(train) stemming(test) # ## **2.11 Lemmatization** from textblob import Word def lemmatization(df): df["tweet"] = df["tweet"].apply( lambda x: " ".join([Word(word).lemmatize() for word in x.split()]) ) print(df["tweet"].head()) lemmatization(train) lemmatization(test) # # **3. 
Advanced Text Processing** # ## **3.1 N-grams** from textblob import TextBlob def combination_of_words(df): return TextBlob(df["tweet"][0]).ngrams(2) combination_of_words(train) combination_of_words(test) # ## **3.2 Term Frequency** def term_frequency(df): tf1 = ( (df["tweet"][1:2]) .apply(lambda x: pd.value_counts(x.split(" "))) .sum(axis=0) .reset_index() ) tf1.columns = ["words", "tf"] return tf1.head() term_frequency(train) term_frequency(test) # ## **3.3 Inverse Document Frequency (IDF)** tf1 = ( (train["tweet"][1:2]) .apply(lambda x: pd.value_counts(x.split(" "))) .sum(axis=0) .reset_index() ) tf1.columns = ["words", "tf"] tf1.head() tf2 = ( (test["tweet"][1:2]) .apply(lambda x: pd.value_counts(x.split(" "))) .sum(axis=0) .reset_index() ) tf2.columns = ["words", "tf"] tf2.head() # ## **3.4 Term Frequency – Inverse Document Frequency (TF-IDF)** tf1 = ( (train["tweet"][1:2]) .apply(lambda x: pd.value_counts(x.split(" "))) .sum(axis=0) .reset_index() ) tf1.columns = ["words", "tf"] for i, word in enumerate(tf1["words"]): tf1.loc[i, "idf"] = np.log( train.shape[0] / (len(train[train["tweet"].str.contains(word)])) ) tf1["tfidf"] = tf1["tf"] * tf1["idf"] tf1 from sklearn.feature_extraction.text import TfidfVectorizer tfidf = TfidfVectorizer( max_features=1000, lowercase=True, analyzer="word", stop_words="english", ngram_range=(1, 1), ) train_vect = tfidf.fit_transform(train["tweet"]) train_vect # ## **3.5 Bag of Words** from sklearn.feature_extraction.text import CountVectorizer bow = CountVectorizer( max_features=1000, lowercase=True, ngram_range=(1, 1), analyzer="word" ) train_bow = bow.fit_transform(train["tweet"]) train_bow # ## **3.6 Sentiment Analysis** def polarity_subjectivity(df): return df["tweet"][:5].apply(lambda x: TextBlob(x).sentiment) polarity_subjectivity(train) polarity_subjectivity(test) # - We can can see that it returns a tuple representing polarity and subjectivity of each tweet. Here, we only extract polarity as it indicates the sentiment as value nearer to 1 means a positive sentiment and values nearer to -1 means a negative sentiment. This can also work as a feature for building a machine learning model. def sentiment_analysis(df): df["sentiment"] = df["tweet"].apply(lambda x: TextBlob(x).sentiment[0]) return df[["tweet", "sentiment"]].head() sentiment_analysis(train) sentiment_analysis(test)
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/730/129730622.ipynb
twitter-sentiment-analysis-hatred-speech
arkhoshghalb
[{"Id": 129730622, "ScriptId": 35085112, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13404689, "CreationDate": "05/16/2023 04:54:21", "VersionNumber": 1.0, "Title": "NLP 2", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 321.0, "LinesInsertedFromPrevious": 31.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 290.0, "LinesInsertedFromFork": 31.0, "LinesDeletedFromFork": 436.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 290.0, "TotalVotes": 0}]
[{"Id": 186074668, "KernelVersionId": 129730622, "SourceDatasetVersionId": 239192}]
[{"Id": 239192, "DatasetId": 100982, "DatasourceVersionId": 250971, "CreatorUserId": 2144642, "LicenseName": "Unknown", "CreationDate": "01/06/2019 05:00:19", "VersionNumber": 1.0, "Title": "Twitter Sentiment Analysis", "Slug": "twitter-sentiment-analysis-hatred-speech", "Subtitle": "Detecting hatred tweets, provided by Analytics Vidhya", "Description": "### Context\n\nThe objective of this task is to detect hate speech in tweets. For the sake of simplicity, we say a tweet contains hate speech if it has a racist or sexist sentiment associated with it. So, the task is to classify racist or sexist tweets from other tweets.\n\nFormally, given a training sample of tweets and labels, where label '1' denotes the tweet is racist/sexist and label '0' denotes the tweet is not racist/sexist, your objective is to predict the labels on the test dataset.\n\n\n### Content\n\nFull tweet texts are provided with their labels for training data. \nMentioned users' username is replaced with @user. \n\n\n### Acknowledgements\n\nDataset is provided by [Analytics Vidhya](http://https://datahack.analyticsvidhya.com/contest/practice-problem-twitter-sentiment-analysis/)", "VersionNotes": "Initial release", "TotalCompressedBytes": 4738708.0, "TotalUncompressedBytes": 1960105.0}]
[{"Id": 100982, "CreatorUserId": 2144642, "OwnerUserId": 2144642.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 239192.0, "CurrentDatasourceVersionId": 250971.0, "ForumId": 110673, "Type": 2, "CreationDate": "01/06/2019 05:00:19", "LastActivityDate": "01/06/2019", "TotalViews": 161178, "TotalDownloads": 25885, "TotalVotes": 254, "TotalKernels": 187}]
[{"Id": 2144642, "UserName": "arkhoshghalb", "DisplayName": "Ali Toosi", "RegisterDate": "08/11/2018", "PerformanceTier": 1}]
import numpy as np import pandas as pd import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # ### **Read dataset** train = pd.read_csv("/kaggle/input/twitter-sentiment-analysis-hatred-speech/train.csv") test = pd.read_csv("/kaggle/input/twitter-sentiment-analysis-hatred-speech/test.csv") # ### **Preview dataset** train.head() test.head() # ## **1.1 Count number of words** def num_of_words(df): df["word_count"] = df["tweet"].apply(lambda x: len(str(x).split(" "))) print(df[["tweet", "word_count"]].head()) num_of_words(train) num_of_words(test) # ## **1.2 Count number of characters** def num_of_chars(df): df["char_count"] = df["tweet"].str.len() ## this also includes spaces print(df[["tweet", "char_count"]].head()) num_of_chars(train) num_of_chars(test) # ## **1.3 Average word length** def avg_word(sentence): words = sentence.split() return sum(len(word) for word in words) / len(words) def avg_word_length(df): df["avg_word"] = df["tweet"].apply(lambda x: avg_word(x)) print(df[["tweet", "avg_word"]].head()) avg_word_length(train) avg_word_length(test) # ## **1.4 Number of stopwords** import nltk from nltk.corpus import stopwords print(set(stopwords.words("english"))) from nltk.corpus import stopwords stop = stopwords.words("english") def stop_words(df): df["stopwords"] = df["tweet"].apply( lambda x: len([x for x in x.split() if x in stop]) ) print(df[["tweet", "stopwords"]].head()) stop_words(train) stop_words(test) # ## **1.5 Number of special characters** def hash_tags(df): df["hashtags"] = df["tweet"].apply( lambda x: len([x for x in x.split() if x.startswith("#")]) ) print(df[["tweet", "hashtags"]].head()) hash_tags(train) hash_tags(test) # ## **1.6 Number of numerics** def num_numerics(df): df["numerics"] = df["tweet"].apply( lambda x: len([x for x in x.split() if x.isdigit()]) ) print(df[["tweet", "numerics"]].head()) num_numerics(train) num_numerics(test) # ## **1.7 Number of Uppercase words** def num_uppercase(df): df["upper_case"] = df["tweet"].apply( lambda x: len([x for x in x.split() if x.isupper()]) ) print(df[["tweet", "upper_case"]].head()) num_uppercase(train) num_uppercase(test) # # **2. 
Basic Text Processing** # ## **2.1 CountVectorization** from sklearn.feature_extraction.text import CountVectorizer corpus = [ "This is the first document.", "This document is the second document.", "And this is the third one.", "Is this the first document?", ] vectorizer = CountVectorizer() X = vectorizer.fit_transform(corpus) print(vectorizer.get_feature_names()) print(X.toarray()) vectorizer2 = CountVectorizer(analyzer="word", ngram_range=(2, 2)) X2 = vectorizer2.fit_transform(corpus) print(vectorizer2.get_feature_names()) print(X2.toarray()) # ## **2.2 HashingVectorizer** from sklearn.feature_extraction.text import HashingVectorizer corpus = [ "This is the first document.", "This document is the second document.", "And this is the third one.", "Is this the first document?", ] vectorizer = HashingVectorizer(n_features=2**4) X = vectorizer.fit_transform(corpus) print(X.shape) # ## **2.3 Lower Casing** def lower_case(df): df["tweet"] = df["tweet"].apply(lambda x: " ".join(x.lower() for x in x.split())) print(df["tweet"].head()) lower_case(train) lower_case(test) # ## **2.4 Punctuation Removal** def punctuation_removal(df): df["tweet"] = df["tweet"].str.replace("[^\w\s]", "") print(df["tweet"].head()) punctuation_removal(train) punctuation_removal(test) # ## **2.5 Stop Words Removal** from nltk.corpus import stopwords stop = stopwords.words("english") def stop_words_removal(df): df["tweet"] = df["tweet"].apply( lambda x: " ".join(x for x in x.split() if x not in stop) ) print(df["tweet"].head()) stop_words_removal(train) stop_words_removal(test) # ## **2.6 Frequent Words Removal** freq = pd.Series(" ".join(train["tweet"]).split()).value_counts()[:10] freq freq = list(freq.index) def frequent_words_removal(df): df["tweet"] = df["tweet"].apply( lambda x: " ".join(x for x in x.split() if x not in freq) ) print(df["tweet"].head()) frequent_words_removal(train) frequent_words_removal(test) # ## **2.7 Rare Words Removal** freq = pd.Series(" ".join(train["tweet"]).split()).value_counts()[-10:] freq freq = list(freq.index) def rare_words_removal(df): df["tweet"] = df["tweet"].apply( lambda x: " ".join(x for x in x.split() if x not in freq) ) print(df["tweet"].head()) rare_words_removal(train) rare_words_removal(test) # ## **2.8 Spelling Correction** from textblob import TextBlob def spell_correction(df): return df["tweet"][:5].apply(lambda x: str(TextBlob(x).correct())) spell_correction(train) spell_correction(test) # ## **2.9 Tokenization** def tokens(df): return TextBlob(df["tweet"][1]).words tokens(train) tokens(test) # ## **2.10 Stemming** from nltk.stem import PorterStemmer st = PorterStemmer() def stemming(df): return df["tweet"][:5].apply( lambda x: " ".join([st.stem(word) for word in x.split()]) ) stemming(train) stemming(test) # ## **2.11 Lemmatization** from textblob import Word def lemmatization(df): df["tweet"] = df["tweet"].apply( lambda x: " ".join([Word(word).lemmatize() for word in x.split()]) ) print(df["tweet"].head()) lemmatization(train) lemmatization(test) # # **3. 
Advanced Text Processing** # ## **3.1 N-grams** from textblob import TextBlob def combination_of_words(df): return TextBlob(df["tweet"][0]).ngrams(2) combination_of_words(train) combination_of_words(test) # ## **3.2 Term Frequency** def term_frequency(df): tf1 = ( (df["tweet"][1:2]) .apply(lambda x: pd.value_counts(x.split(" "))) .sum(axis=0) .reset_index() ) tf1.columns = ["words", "tf"] return tf1.head() term_frequency(train) term_frequency(test) # ## **3.3 Inverse Document Frequency (IDF)** tf1 = ( (train["tweet"][1:2]) .apply(lambda x: pd.value_counts(x.split(" "))) .sum(axis=0) .reset_index() ) tf1.columns = ["words", "tf"] tf1.head() tf2 = ( (test["tweet"][1:2]) .apply(lambda x: pd.value_counts(x.split(" "))) .sum(axis=0) .reset_index() ) tf2.columns = ["words", "tf"] tf2.head() # ## **3.4 Term Frequency – Inverse Document Frequency (TF-IDF)** tf1 = ( (train["tweet"][1:2]) .apply(lambda x: pd.value_counts(x.split(" "))) .sum(axis=0) .reset_index() ) tf1.columns = ["words", "tf"] for i, word in enumerate(tf1["words"]): tf1.loc[i, "idf"] = np.log( train.shape[0] / (len(train[train["tweet"].str.contains(word)])) ) tf1["tfidf"] = tf1["tf"] * tf1["idf"] tf1 from sklearn.feature_extraction.text import TfidfVectorizer tfidf = TfidfVectorizer( max_features=1000, lowercase=True, analyzer="word", stop_words="english", ngram_range=(1, 1), ) train_vect = tfidf.fit_transform(train["tweet"]) train_vect # ## **3.5 Bag of Words** from sklearn.feature_extraction.text import CountVectorizer bow = CountVectorizer( max_features=1000, lowercase=True, ngram_range=(1, 1), analyzer="word" ) train_bow = bow.fit_transform(train["tweet"]) train_bow # ## **3.6 Sentiment Analysis** def polarity_subjectivity(df): return df["tweet"][:5].apply(lambda x: TextBlob(x).sentiment) polarity_subjectivity(train) polarity_subjectivity(test) # - We can can see that it returns a tuple representing polarity and subjectivity of each tweet. Here, we only extract polarity as it indicates the sentiment as value nearer to 1 means a positive sentiment and values nearer to -1 means a negative sentiment. This can also work as a feature for building a machine learning model. def sentiment_analysis(df): df["sentiment"] = df["tweet"].apply(lambda x: TextBlob(x).sentiment[0]) return df[["tweet", "sentiment"]].head() sentiment_analysis(train) sentiment_analysis(test)
false
0
2,740
0
2,921
2,740
129730870
# ### check normal distribution import scipy.stats as stat import pylab import numpy as np import seaborn as sns import matplotlib.pyplot as plt df = sns.load_dataset("iris") df.head() def plot_data(df, feature): plt.figure(figsize=(10, 6)) plt.subplot(1, 2, 1) df[feature].hist() plt.subplot(1, 2, 2) stat.probplot(df[feature], dist="norm", plot=pylab) plt.show() plot_data(df, "sepal_width") # smooth def plot_data(df, feature): plt.figure(figsize=(10, 6)) plt.subplot(1, 2, 1) sns.histplot(df[feature], kde=True) plt.subplot(1, 2, 2) stat.probplot(df[feature], dist="norm", plot=pylab) plt.show() plot_data(df, "sepal_width") # not gaussian dist plot_data(df, "petal_length")
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/730/129730870.ipynb
null
null
[{"Id": 129730870, "ScriptId": 38580940, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14487588, "CreationDate": "05/16/2023 04:57:13", "VersionNumber": 1.0, "Title": "check normal distribution", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 34.0, "LinesInsertedFromPrevious": 34.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 8}]
null
null
null
null
# ### check normal distribution import scipy.stats as stat import pylab import numpy as np import seaborn as sns import matplotlib.pyplot as plt df = sns.load_dataset("iris") df.head() def plot_data(df, feature): plt.figure(figsize=(10, 6)) plt.subplot(1, 2, 1) df[feature].hist() plt.subplot(1, 2, 2) stat.probplot(df[feature], dist="norm", plot=pylab) plt.show() plot_data(df, "sepal_width") # smooth def plot_data(df, feature): plt.figure(figsize=(10, 6)) plt.subplot(1, 2, 1) sns.histplot(df[feature], kde=True) plt.subplot(1, 2, 2) stat.probplot(df[feature], dist="norm", plot=pylab) plt.show() plot_data(df, "sepal_width") # not gaussian dist plot_data(df, "petal_length")
false
0
270
8
270
270
129730573
import os import shutil import time from torchvision import datasets import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.nn.functional as F import torchvision from torchvision.transforms import transforms import torch.optim as optim from torchvision.models import resnet34, resnet18, resnet50 import numpy as np # linear algebra import pandas as pa device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device) # Might have to change this according to kaggle def collect_data(): batch_size_train = 128 # matters tune batch_size_test = 1 transforms_data = transforms.Compose( [ transforms.Resize((200, 200)), transforms.CenterCrop(200), transforms.RandomHorizontalFlip(), transforms.ToTensor(), ] ) transform_test = transforms.Compose( [ transforms.Resize(200), transforms.ToTensor(), ] ) trains_data = torchvision.datasets.ImageFolder( "/kaggle/input/birds23wi/birds/train", transforms_data ) # print(len(trains_data)) training_size = int(len(trains_data) * 0.8) validation_size = len(trains_data) - training_size training_data, validation_data = torch.utils.data.random_split( trains_data, [training_size, validation_size] ) training_loader = torch.utils.data.DataLoader( training_data, batch_size_train, shuffle=True, num_workers=2 ) validation_loader = torch.utils.data.DataLoader( validation_data, batch_size_train, shuffle=True, num_workers=2 ) testing_data = torchvision.datasets.ImageFolder( "/kaggle/input/birds23wi/birds/test", transform_test ) testing_loader = torch.utils.data.DataLoader( testing_data, batch_size_test, num_workers=2 ) classes = open("/kaggle/input/birds23wi/birds/names.txt").read().strip().split("\n") class_to_idx = trains_data.class_to_idx idx_to_class = {int(v): int(k) for k, v in class_to_idx.items()} idx_to_name = {k: classes[v] for k, v in idx_to_class.items()} return { "train": training_loader, "validation": validation_loader, "test": testing_loader, "to_class": idx_to_class, "to_name": idx_to_name, } data = collect_data() # print(data) # lr and num of epochs batch size def train(net, dataloader, epochs=1, lr=0.01, momentum=0.9, decay=0.0, verbose=1): net.to(device) net.train() # net.train() losses = [] criterion = nn.CrossEntropyLoss() optimizer = optim.SGD( net.parameters(), lr=lr, momentum=momentum, weight_decay=decay ) for epoch in range(epochs): sum_loss = 0.0 for i, batch in enumerate(dataloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = batch[0].to(device), batch[1].to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() # autograd magic, computes all the partial derivatives optimizer.step() # takes a step in gradient direction # print statistics losses.append(loss.item()) sum_loss += loss.item() if i % 100 == 99: # print every 100 mini-batches if verbose: print("[%d, %5d] loss: %.3f" % (epoch + 1, i + 1, sum_loss / 100)) sum_loss = 0.0 curr_acc = accuracy(net, dataloader) print(curr_acc) return losses def accuracy(net, dataloader): correct = 0 total = 0 with torch.no_grad(): for batch in dataloader: images, labels = batch[0].to(device), batch[1].to(device) outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() return correct / total def smooth(x, size): return np.convolve(x, np.ones(size) / size, mode="valid") """#resnet = torch.hub.load('pytorch/vision', 'resnet=34', pretrained=True) # entrypoints = 
torch.hub.list('pytorch/vision', force_reload=True) #resnet.fc = nn.Linear(500, 550) #losses = train(resnet, data['train'], 25, 0.01) model = resnet18(weights=None).to(device) model.fc = nn.Sequential( nn.Linear(512, 128), # larger than 128 nn.ReLU(inplace=True), nn.Linear(128, 555).to(device) ) losses = train(model, data['train'], 25, 0.01)""" # def get_bird_data(augmentation=0): # transform_train = transforms.Compose([ # transforms.Resize(128), # transforms.RandomCrop(128, padding=8, padding_mode='edge'), # Take 128x128 crops from padded images # transforms.RandomHorizontalFlip(), # 50% of time flip image along y-axis # transforms.ToTensor(), # ]) # transform_test = transforms.Compose([ # transforms.Resize(128), # transforms.ToTensor(), # ]) # trainset = torchvision.datasets.ImageFolder(root='/kaggle/input/birds23wi/birds/train', transform=transform_train) # trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=2) # testset = torchvision.datasets.ImageFolder(root='/kaggle/input/birds23wi/birds/test', transform=transform_test) # testloader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=False, num_workers=2) # classes = open("/kaggle/input/birds23wi/birds/names.txt").read().strip().split("\n") # class_to_idx = trainset.class_to_idx # idx_to_class = {int(v): int(k) for k, v in class_to_idx.items()} # idx_to_name = {k: classes[v] for k,v in idx_to_class.items()} # return {'train': trainloader, 'test': testloader, 'to_class': idx_to_class, 'to_name':idx_to_name} # data = get_bird_data() # def train(net, dataloader, epochs=1, start_epoch=0, lr=0.01, momentum=0.9, decay=0.0005, # verbose=1, print_every=10, state=None, schedule={}, checkpoint_path=None): # net.to(device) # net.train() # losses = [] # criterion = nn.CrossEntropyLoss() # optimizer = optim.SGD(net.parameters(), lr=lr, momentum=momentum, weight_decay=decay) # # Load previous training state # if state: # net.load_state_dict(state['net']) # optimizer.load_state_dict(state['optimizer']) # start_epoch = state['epoch'] # losses = state['losses'] # # Fast forward lr schedule through already trained epochs # for epoch in range(start_epoch): # if epoch in schedule: # print ("Learning rate: %f"% schedule[epoch]) # for g in optimizer.param_groups: # g['lr'] = schedule[epoch] # for epoch in range(start_epoch, epochs): # sum_loss = 0.0 # # Update learning rate when scheduled # if epoch in schedule: # print ("Learning rate: %f"% schedule[epoch]) # for g in optimizer.param_groups: # g['lr'] = schedule[epoch] # for i, batch in enumerate(dataloader, 0): # inputs, labels = batch[0].to(device), batch[1].to(device) # optimizer.zero_grad() # outputs = net(inputs) # loss = criterion(outputs, labels) # loss.backward() # autograd magic, computes all the partial derivatives # optimizer.step() # takes a step in gradient direction # losses.append(loss.item()) # sum_loss += loss.item() # if i % print_every == print_every-1: # print every 10 mini-batches # if verbose: # print('[%d, %5d] loss: %.3f' % (epoch, i + 1, sum_loss / print_every)) # sum_loss = 0.0 # if checkpoint_path: # state = {'epoch': epoch+1, 'net': net.state_dict(), 'optimizer': optimizer.state_dict(), 'losses': losses} # torch.save(state, checkpoint_path + 'checkpoint-%d.pkl'%(epoch+1)) # return losses resnet = torch.hub.load("pytorch/vision:v0.6.0", "resnet34", pretrained=True) resnet.fc = nn.Linear(512, 555) # This will reinitialize the layer as well losses = train(resnet, data["train"], epochs=1, lr=0.01, momentum=0.9, decay=0.0005) 
val_accuracy = accuracy(resnet, data["validation"]) print(val_accuracy) plt.plot(smooth(losses, 50)) # def predict(net, dataloader, ofname): # out = open(ofname, 'w') # out.write("path,class\n") # net.to(device) # net.eval() # correct = 0 # total = 0 # with torch.no_grad(): # for i, (images, labels) in enumerate(dataloader, 0): # if i%100 == 0: # print(i) # images, labels = images.to(device), labels.to(device) # outputs = net(images) # _, predicted = torch.max(outputs.data, 1) # fname, _ = dataloader.dataset.samples[i] # print("test/{},{}\n".format(fname.split('/')[-1], data['to_class'][predicted.item()])) # out.close() # predict(resnet, data['test'], "preds.csv") # model = resnet18(weights=None).to(device) # model.fc = nn.Sequential( # nn.Linear(512, 128), # larger than 128 # nn.ReLU(inplace=True), # nn.Linear(128, 555).to(device) # ) # losses = train(model, data['train'], epochs=1, lr=.01, print_every=10)
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/730/129730573.ipynb
null
null
[{"Id": 129730573, "ScriptId": 38580948, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13661503, "CreationDate": "05/16/2023 04:53:44", "VersionNumber": 1.0, "Title": "Bird Classifier", "EvaluationDate": "05/16/2023", "IsChange": false, "TotalLines": 235.0, "LinesInsertedFromPrevious": 0.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 235.0, "LinesInsertedFromFork": 0.0, "LinesDeletedFromFork": 0.0, "LinesChangedFromFork": 0.0, "LinesUnchangedFromFork": 235.0, "TotalVotes": 0}]
null
null
null
null
import os import shutil import time from torchvision import datasets import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.nn.functional as F import torchvision from torchvision.transforms import transforms import torch.optim as optim from torchvision.models import resnet34, resnet18, resnet50 import numpy as np # linear algebra import pandas as pa device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device) # Might have to change this according to kaggle def collect_data(): batch_size_train = 128 # matters tune batch_size_test = 1 transforms_data = transforms.Compose( [ transforms.Resize((200, 200)), transforms.CenterCrop(200), transforms.RandomHorizontalFlip(), transforms.ToTensor(), ] ) transform_test = transforms.Compose( [ transforms.Resize(200), transforms.ToTensor(), ] ) trains_data = torchvision.datasets.ImageFolder( "/kaggle/input/birds23wi/birds/train", transforms_data ) # print(len(trains_data)) training_size = int(len(trains_data) * 0.8) validation_size = len(trains_data) - training_size training_data, validation_data = torch.utils.data.random_split( trains_data, [training_size, validation_size] ) training_loader = torch.utils.data.DataLoader( training_data, batch_size_train, shuffle=True, num_workers=2 ) validation_loader = torch.utils.data.DataLoader( validation_data, batch_size_train, shuffle=True, num_workers=2 ) testing_data = torchvision.datasets.ImageFolder( "/kaggle/input/birds23wi/birds/test", transform_test ) testing_loader = torch.utils.data.DataLoader( testing_data, batch_size_test, num_workers=2 ) classes = open("/kaggle/input/birds23wi/birds/names.txt").read().strip().split("\n") class_to_idx = trains_data.class_to_idx idx_to_class = {int(v): int(k) for k, v in class_to_idx.items()} idx_to_name = {k: classes[v] for k, v in idx_to_class.items()} return { "train": training_loader, "validation": validation_loader, "test": testing_loader, "to_class": idx_to_class, "to_name": idx_to_name, } data = collect_data() # print(data) # lr and num of epochs batch size def train(net, dataloader, epochs=1, lr=0.01, momentum=0.9, decay=0.0, verbose=1): net.to(device) net.train() # net.train() losses = [] criterion = nn.CrossEntropyLoss() optimizer = optim.SGD( net.parameters(), lr=lr, momentum=momentum, weight_decay=decay ) for epoch in range(epochs): sum_loss = 0.0 for i, batch in enumerate(dataloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = batch[0].to(device), batch[1].to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() # autograd magic, computes all the partial derivatives optimizer.step() # takes a step in gradient direction # print statistics losses.append(loss.item()) sum_loss += loss.item() if i % 100 == 99: # print every 100 mini-batches if verbose: print("[%d, %5d] loss: %.3f" % (epoch + 1, i + 1, sum_loss / 100)) sum_loss = 0.0 curr_acc = accuracy(net, dataloader) print(curr_acc) return losses def accuracy(net, dataloader): correct = 0 total = 0 with torch.no_grad(): for batch in dataloader: images, labels = batch[0].to(device), batch[1].to(device) outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() return correct / total def smooth(x, size): return np.convolve(x, np.ones(size) / size, mode="valid") """#resnet = torch.hub.load('pytorch/vision', 'resnet=34', pretrained=True) # entrypoints = 
torch.hub.list('pytorch/vision', force_reload=True) #resnet.fc = nn.Linear(500, 550) #losses = train(resnet, data['train'], 25, 0.01) model = resnet18(weights=None).to(device) model.fc = nn.Sequential( nn.Linear(512, 128), # larger than 128 nn.ReLU(inplace=True), nn.Linear(128, 555).to(device) ) losses = train(model, data['train'], 25, 0.01)""" # def get_bird_data(augmentation=0): # transform_train = transforms.Compose([ # transforms.Resize(128), # transforms.RandomCrop(128, padding=8, padding_mode='edge'), # Take 128x128 crops from padded images # transforms.RandomHorizontalFlip(), # 50% of time flip image along y-axis # transforms.ToTensor(), # ]) # transform_test = transforms.Compose([ # transforms.Resize(128), # transforms.ToTensor(), # ]) # trainset = torchvision.datasets.ImageFolder(root='/kaggle/input/birds23wi/birds/train', transform=transform_train) # trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=2) # testset = torchvision.datasets.ImageFolder(root='/kaggle/input/birds23wi/birds/test', transform=transform_test) # testloader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=False, num_workers=2) # classes = open("/kaggle/input/birds23wi/birds/names.txt").read().strip().split("\n") # class_to_idx = trainset.class_to_idx # idx_to_class = {int(v): int(k) for k, v in class_to_idx.items()} # idx_to_name = {k: classes[v] for k,v in idx_to_class.items()} # return {'train': trainloader, 'test': testloader, 'to_class': idx_to_class, 'to_name':idx_to_name} # data = get_bird_data() # def train(net, dataloader, epochs=1, start_epoch=0, lr=0.01, momentum=0.9, decay=0.0005, # verbose=1, print_every=10, state=None, schedule={}, checkpoint_path=None): # net.to(device) # net.train() # losses = [] # criterion = nn.CrossEntropyLoss() # optimizer = optim.SGD(net.parameters(), lr=lr, momentum=momentum, weight_decay=decay) # # Load previous training state # if state: # net.load_state_dict(state['net']) # optimizer.load_state_dict(state['optimizer']) # start_epoch = state['epoch'] # losses = state['losses'] # # Fast forward lr schedule through already trained epochs # for epoch in range(start_epoch): # if epoch in schedule: # print ("Learning rate: %f"% schedule[epoch]) # for g in optimizer.param_groups: # g['lr'] = schedule[epoch] # for epoch in range(start_epoch, epochs): # sum_loss = 0.0 # # Update learning rate when scheduled # if epoch in schedule: # print ("Learning rate: %f"% schedule[epoch]) # for g in optimizer.param_groups: # g['lr'] = schedule[epoch] # for i, batch in enumerate(dataloader, 0): # inputs, labels = batch[0].to(device), batch[1].to(device) # optimizer.zero_grad() # outputs = net(inputs) # loss = criterion(outputs, labels) # loss.backward() # autograd magic, computes all the partial derivatives # optimizer.step() # takes a step in gradient direction # losses.append(loss.item()) # sum_loss += loss.item() # if i % print_every == print_every-1: # print every 10 mini-batches # if verbose: # print('[%d, %5d] loss: %.3f' % (epoch, i + 1, sum_loss / print_every)) # sum_loss = 0.0 # if checkpoint_path: # state = {'epoch': epoch+1, 'net': net.state_dict(), 'optimizer': optimizer.state_dict(), 'losses': losses} # torch.save(state, checkpoint_path + 'checkpoint-%d.pkl'%(epoch+1)) # return losses resnet = torch.hub.load("pytorch/vision:v0.6.0", "resnet34", pretrained=True) resnet.fc = nn.Linear(512, 555) # This will reinitialize the layer as well losses = train(resnet, data["train"], epochs=1, lr=0.01, momentum=0.9, decay=0.0005) 
val_accuracy = accuracy(resnet, data["validation"]) print(val_accuracy) plt.plot(smooth(losses, 50)) # def predict(net, dataloader, ofname): # out = open(ofname, 'w') # out.write("path,class\n") # net.to(device) # net.eval() # correct = 0 # total = 0 # with torch.no_grad(): # for i, (images, labels) in enumerate(dataloader, 0): # if i%100 == 0: # print(i) # images, labels = images.to(device), labels.to(device) # outputs = net(images) # _, predicted = torch.max(outputs.data, 1) # fname, _ = dataloader.dataset.samples[i] # print("test/{},{}\n".format(fname.split('/')[-1], data['to_class'][predicted.item()])) # out.close() # predict(resnet, data['test'], "preds.csv") # model = resnet18(weights=None).to(device) # model.fc = nn.Sequential( # nn.Linear(512, 128), # larger than 128 # nn.ReLU(inplace=True), # nn.Linear(128, 555).to(device) # ) # losses = train(model, data['train'], epochs=1, lr=.01, print_every=10)
false
0
2,885
0
2,885
2,885
129730600
import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # # **Count Vectorizer** import pandas as pd from sklearn.feature_extraction.text import CountVectorizer text = [ "Hello my name is james", "james this is my python notebook", "james trying to create a big dataset", "james of words to try differnt", "features of count vectorizer", ] coun_vect = CountVectorizer() count_matrix = coun_vect.fit_transform(text) count_array = count_matrix.toarray() df = pd.DataFrame(data=count_array, columns=coun_vect.get_feature_names()) print(df) text = ["hello my name is james", "Hello my name is James"] coun_vect = CountVectorizer(lowercase=False) count_matrix = coun_vect.fit_transform(text) count_array = count_matrix.toarray() df = pd.DataFrame(data=count_array, columns=coun_vect.get_feature_names()) print(df) # # **TF-IDF** # import required module from sklearn.feature_extraction.text import TfidfVectorizer # assign documents d0 = "this is my python notebook" d1 = "python notebook" d2 = "python" # merge documents into a single corpus string = [d0, d1, d2] # create object tfidf = TfidfVectorizer() # get tf-df values result = tfidf.fit_transform(string) # get idf values print("\nidf values:") for ele1, ele2 in zip(tfidf.get_feature_names(), tfidf.idf_): print(ele1, ":", ele2) # get indexing print("\nWord indexes:") print(tfidf.vocabulary_) # display tf-idf values print("\ntf-idf value:") print(result) # in matrix form print("\ntf-idf values in matrix form:") print(result.toarray())
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/730/129730600.ipynb
null
null
[{"Id": 129730600, "ScriptId": 35085630, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13404689, "CreationDate": "05/16/2023 04:54:04", "VersionNumber": 1.0, "Title": "NLP 1", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 72.0, "LinesInsertedFromPrevious": 72.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # # **Count Vectorizer** import pandas as pd from sklearn.feature_extraction.text import CountVectorizer text = [ "Hello my name is james", "james this is my python notebook", "james trying to create a big dataset", "james of words to try differnt", "features of count vectorizer", ] coun_vect = CountVectorizer() count_matrix = coun_vect.fit_transform(text) count_array = count_matrix.toarray() df = pd.DataFrame(data=count_array, columns=coun_vect.get_feature_names()) print(df) text = ["hello my name is james", "Hello my name is James"] coun_vect = CountVectorizer(lowercase=False) count_matrix = coun_vect.fit_transform(text) count_array = count_matrix.toarray() df = pd.DataFrame(data=count_array, columns=coun_vect.get_feature_names()) print(df) # # **TF-IDF** # import required module from sklearn.feature_extraction.text import TfidfVectorizer # assign documents d0 = "this is my python notebook" d1 = "python notebook" d2 = "python" # merge documents into a single corpus string = [d0, d1, d2] # create object tfidf = TfidfVectorizer() # get tf-df values result = tfidf.fit_transform(string) # get idf values print("\nidf values:") for ele1, ele2 in zip(tfidf.get_feature_names(), tfidf.idf_): print(ele1, ":", ele2) # get indexing print("\nWord indexes:") print(tfidf.vocabulary_) # display tf-idf values print("\ntf-idf value:") print(result) # in matrix form print("\ntf-idf values in matrix form:") print(result.toarray())
false
0
652
0
652
652
129730643
<jupyter_start><jupyter_text>Spam Text Message Classification

### Context
Coming Soon

### Content
Coming Soon

Kaggle dataset identifier: spam-text-message-classification
<jupyter_script>import numpy as np
import pandas as pd
import os

for dirname, _, filenames in os.walk("/kaggle/input"):
    for filename in filenames:
        print(os.path.join(dirname, filename))

df = pd.read_csv(
    "/kaggle/input/spam-text-message-classification/SPAM text message 20170820 - Data.csv"
)
df.head()

df["Category"].value_counts()

# spam = df[df['Message'].str.contains("win" and "free")]
# spam['Category'].value_counts()
# (note: the expression "win" and "free" evaluates to just "free")

ham_message_length = []
spam_message_length = []
for i in df.values:
    if i[0] == "ham":
        ham_message_length.append(len(i[1]))
    else:
        spam_message_length.append(len(i[1]))
print(ham_message_length[:10])
print(spam_message_length[:10])

import pandas as pd
from gensim.models.word2vec import Word2Vec
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
from keras.layers import (
    Dense,
    Dropout,
    Conv1D,
    MaxPool1D,
    GlobalMaxPool1D,
    Embedding,
    Activation,
)
from keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer  # PorterStemmer lives in nltk.stem, not nltk.stem.snowball
from sklearn import preprocessing

nltk.download("stopwords")  # required before stopwords.words() can be used


def preprocess_text(sen):
    # Remove punctuation and numbers
    sentence = re.sub("[^a-zA-Z]", " ", sen)
    # Single character removal
    sentence = re.sub(r"\s+[a-zA-Z]\s+", " ", sentence)
    # Removing multiple spaces
    sentence = re.sub(r"\s+", " ", sentence)
    stops = set(stopwords.words("english"))
    porter = PorterStemmer()
    # Drop stopwords, then stem what remains (the original replace-based loop
    # deleted each stopword and then tried to stem the already-deleted word)
    words = [
        porter.stem(word) for word in sentence.lower().split() if word not in stops
    ]
    return " ".join(words)


df["Message"] = df["Message"].apply(preprocess_text)
df.head()

mes = []
for i in df["Message"]:
    mes.append(i.split())
for i in range(5):
    print(mes[i])

word2vec_model = Word2Vec(mes, vector_size=500, window=3, min_count=1, workers=16)
print(word2vec_model)

token = Tokenizer(7229)
token.fit_on_texts(df["Message"])
text = token.texts_to_sequences(df["Message"])
text = pad_sequences(text, 75)

le = preprocessing.LabelEncoder()
y = le.fit_transform(df["Category"])
y = to_categorical(y)

# x_train, x_test, y_train, y_test = train_test_split(np.array(text), y, test_size=0.2, stratify=y)
trainData, testData, trainTruth, testTruth = train_test_split(
    np.array(text), y, test_size=0.2, stratify=y
)

import tensorflow as tf

ann = tf.keras.models.Sequential()
ann.add(tf.keras.layers.Dense(units=110, activation="relu"))
ann.add(tf.keras.layers.Dense(units=110, activation="relu"))
ann.add(tf.keras.layers.Dense(units=2, activation="sigmoid"))
ann.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
ann.fit(trainData, trainTruth, batch_size=32, epochs=100)

y_pred = ann.predict(testData)
y_test_class = np.argmax(testTruth, axis=1)
y_pred_class = np.argmax(y_pred, axis=1)

from sklearn.metrics import classification_report, confusion_matrix

print(classification_report(y_test_class, y_pred_class))
print(confusion_matrix(y_test_class, y_pred_class))

# Evaluate model
score = ann.evaluate(testData, testTruth, verbose=0)
# print("Test loss:", score[0])
print("Test accuracy:", score[1])
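Note that the gensim Word2Vec model above is trained but its vectors are never wired into the Keras network; the dense layers consume raw padded token ids instead. A minimal sketch of one way to connect them, assuming the `token`, `word2vec_model`, `trainData`, and `trainTruth` objects defined in the script (layer sizes and epoch count are illustrative, not from the original):

import numpy as np
import tensorflow as tf

vocab_size = len(token.word_index) + 1      # +1 for the padding index 0
embedding_dim = word2vec_model.vector_size  # 500 in the script above

# copy each known word's gensim vector into an embedding matrix
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, idx in token.word_index.items():
    if word in word2vec_model.wv:
        embedding_matrix[idx] = word2vec_model.wv[word]

emb_model = tf.keras.models.Sequential([
    tf.keras.layers.Embedding(
        vocab_size,
        embedding_dim,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False,  # keep the pretrained vectors frozen
    ),
    tf.keras.layers.GlobalAveragePooling1D(),  # average word vectors per message
    tf.keras.layers.Dense(110, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
emb_model.compile(
    optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
emb_model.fit(trainData, trainTruth, batch_size=32, epochs=10)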
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/730/129730643.ipynb
spam-text-message-classification
null
[{"Id": 129730643, "ScriptId": 35541702, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13404689, "CreationDate": "05/16/2023 04:54:38", "VersionNumber": 1.0, "Title": "NLP 3", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 105.0, "LinesInsertedFromPrevious": 105.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
[{"Id": 186074696, "KernelVersionId": 129730643, "SourceDatasetVersionId": 3494}]
[{"Id": 3494, "DatasetId": 2050, "DatasourceVersionId": 3494, "CreatorUserId": 847012, "LicenseName": "CC0: Public Domain", "CreationDate": "08/20/2017 06:32:31", "VersionNumber": 1.0, "Title": "Spam Text Message Classification", "Slug": "spam-text-message-classification", "Subtitle": "Let's battle with annoying spammer with data science.", "Description": "### Context\n\nComing Soon\n\n### Content\n\nComing Soon\n\n### Acknowledgements\nSpecial thanks to;\nhttp://www.dt.fee.unicamp.br/~tiago/smsspamcollection/\n\n### Inspiration\n\nComing soon", "VersionNotes": "Initial release", "TotalCompressedBytes": 485702.0, "TotalUncompressedBytes": 485702.0}]
[{"Id": 2050, "CreatorUserId": 847012, "OwnerUserId": NaN, "OwnerOrganizationId": 912.0, "CurrentDatasetVersionId": 3494.0, "CurrentDatasourceVersionId": 3494.0, "ForumId": 5797, "Type": 2, "CreationDate": "08/20/2017 06:32:31", "LastActivityDate": "02/03/2018", "TotalViews": 92580, "TotalDownloads": 15318, "TotalVotes": 129, "TotalKernels": 165}]
null
false
0
1,070
0
1,115
1,070
129730525
<jupyter_start><jupyter_text>Heart Attack Analysis & Prediction Dataset

## Hone your analytical and ML skills by participating in tasks of my other dataset's. Given below.

[Data Science Job Posting on Glassdoor](https://www.kaggle.com/rashikrahmanpritom/data-science-job-posting-on-glassdoor)
[Groceries dataset for Market Basket Analysis(MBA)](https://www.kaggle.com/rashikrahmanpritom/groceries-dataset-for-market-basket-analysismba)
[Dataset for Facial recognition using ML approach](https://www.kaggle.com/rashikrahmanpritom/dataset-for-facial-recognition-using-ml-approach)
[Covid_w/wo_Pneumonia Chest Xray](https://www.kaggle.com/rashikrahmanpritom/covid-wwo-pneumonia-chest-xray)
[Disney Movies 1937-2016 Gross Income](https://www.kaggle.com/rashikrahmanpritom/disney-movies-19372016-total-gross)
[Bollywood Movie data from 2000 to 2019](https://www.kaggle.com/rashikrahmanpritom/bollywood-movie-data-from-2000-to-2019)
[17.7K English song data from 2008-2017](https://www.kaggle.com/rashikrahmanpritom/177k-english-song-data-from-20082017)

## About this dataset

- Age : Age of the patient
- Sex : Sex of the patient
- exang: exercise induced angina (1 = yes; 0 = no)
- ca: number of major vessels (0-3)
- cp : Chest Pain type chest pain type
  - Value 1: typical angina
  - Value 2: atypical angina
  - Value 3: non-anginal pain
  - Value 4: asymptomatic
- trtbps : resting blood pressure (in mm Hg)
- chol : cholestoral in mg/dl fetched via BMI sensor
- fbs : (fasting blood sugar &gt; 120 mg/dl) (1 = true; 0 = false)
- rest_ecg : resting electrocardiographic results
  - Value 0: normal
  - Value 1: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of &gt; 0.05 mV)
  - Value 2: showing probable or definite left ventricular hypertrophy by Estes' criteria
- thalach : maximum heart rate achieved
- target : 0= less chance of heart attack 1= more chance of heart attack

Kaggle dataset identifier: heart-attack-analysis-prediction-dataset
<jupyter_code>import pandas as pd
df = pd.read_csv('heart-attack-analysis-prediction-dataset/heart.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
 #   Column    Non-Null Count  Dtype
---  ------    --------------  -----
 0   age       303 non-null    int64
 1   sex       303 non-null    int64
 2   cp        303 non-null    int64
 3   trtbps    303 non-null    int64
 4   chol      303 non-null    int64
 5   fbs       303 non-null    int64
 6   restecg   303 non-null    int64
 7   thalachh  303 non-null    int64
 8   exng      303 non-null    int64
 9   oldpeak   303 non-null    float64
 10  slp       303 non-null    int64
 11  caa       303 non-null    int64
 12  thall     303 non-null    int64
 13  output    303 non-null    int64
dtypes: float64(1), int64(13)
memory usage: 33.3 KB
<jupyter_text>Examples:
{ "age": 63.0, "sex": 1.0, "cp": 3.0, "trtbps": 145.0, "chol": 233.0, "fbs": 1.0, "restecg": 0.0, "thalachh": 150.0, "exng": 0.0, "oldpeak": 2.3, "slp": 0.0, "caa": 0.0, "thall": 1.0, "output": 1.0 }
{ "age": 37.0, "sex": 1.0, "cp": 2.0, "trtbps": 130.0, "chol": 250.0, "fbs": 0.0, "restecg": 1.0, "thalachh": 187.0, "exng": 0.0, "oldpeak": 3.5, "slp": 0.0, "caa": 0.0, "thall": 2.0, "output": 1.0 }
{ "age": 41.0, "sex": 0.0, "cp": 1.0, "trtbps": 130.0, "chol": 204.0, "fbs": 0.0, "restecg": 0.0, "thalachh": 172.0, "exng": 0.0, "oldpeak": 1.4, "slp": 2.0, "caa": 0.0, "thall": 2.0, "output": 1.0 }
{ "age": 56.0, "sex": 1.0, "cp": 1.0, "trtbps": 120.0, "chol": 236.0, "fbs": 0.0, "restecg": 1.0, "thalachh": 178.0, "exng": 0.0, "oldpeak": 0.8, "slp": 2.0, "caa": 0.0, "thall": 2.0, "output": 1.0 }
<jupyter_script>import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)

# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os

for dirname, _, filenames in os.walk("/kaggle/input"):
    for filename in filenames:
        print(os.path.join(dirname, filename))

# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session

heart = pd.read_csv("/kaggle/input/heart-attack-analysis-prediction-dataset/heart.csv")
heart.head()
heart.info()
heart.describe()
heart.nunique()
heart.isnull().sum()

# EDA - Exploratory Data Analysis
# Data Visualization
import matplotlib.pyplot as plt
import seaborn as sns

# Let us now observe how correlated our features are
sns.histplot(data=heart, x="age", kde=True)
sns.histplot(data=heart, x="trtbps", kde=True)
sns.heatmap(heart.corr())
sns.boxplot(data=heart, x="output", y="age")
sns.histplot(heart["output"])
plt.grid(True)
sns.countplot(data=heart, x="output", hue="sex")
heart["sex"].value_counts()

# In the counts above we observe that there are more males than females in our population.
sns.pairplot(heart)

col = heart.columns
for col_name in col:
    if heart[col_name].dtypes == "int64" or heart[col_name].dtypes == "float64":
        plt.hist(heart[col_name])
        plt.xlabel(col_name)
        plt.ylabel("count")
        plt.show()

for col_name in col:
    if heart[col_name].dtypes == "int64" or heart[col_name].dtypes == "float64":
        plt.boxplot(heart[col_name])
        plt.xlabel(col_name)
        plt.ylabel("count")
        plt.show()

# Feature Scaling
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.fit(heart.drop("output", axis=1))

# Logistic Regression
x = heart.drop(columns=["output"])
y = heart["output"]

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=25
)

from sklearn.linear_model import LogisticRegression

log = LogisticRegression()
log.fit(x_train, y_train)
pred = log.predict(x_test)
pred

from sklearn.metrics import classification_report, confusion_matrix

print(classification_report(y_test, pred))
print(confusion_matrix(y_test, pred))

_, axes = plt.subplots(ncols=2)
sns.histplot(y_test, ax=axes[0])
sns.histplot(pred, ax=axes[1])
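One thing worth noting: the StandardScaler above is fitted but its transform is never applied, so the logistic regression actually trains on raw features. A minimal sketch (assuming only the `heart` DataFrame loaded above) of using the scaler properly, fitting on the training split alone to avoid leaking test statistics:

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

x = heart.drop(columns=["output"])
y = heart["output"]
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=25
)

scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)  # fit statistics on train only
x_test_scaled = scaler.transform(x_test)        # reuse them on the test split

log = LogisticRegression()
log.fit(x_train_scaled, y_train)
print("Test accuracy:", log.score(x_test_scaled, y_test))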
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/730/129730525.ipynb
heart-attack-analysis-prediction-dataset
rashikrahmanpritom
[{"Id": 129730525, "ScriptId": 38577733, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 11841435, "CreationDate": "05/16/2023 04:53:00", "VersionNumber": 1.0, "Title": "LogisticRegressionHeartattack", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 113.0, "LinesInsertedFromPrevious": 113.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 1}]
[{"Id": 186074517, "KernelVersionId": 129730525, "SourceDatasetVersionId": 2047221}]
[{"Id": 2047221, "DatasetId": 1226038, "DatasourceVersionId": 2087216, "CreatorUserId": 4730101, "LicenseName": "CC0: Public Domain", "CreationDate": "03/22/2021 11:40:59", "VersionNumber": 2.0, "Title": "Heart Attack Analysis & Prediction Dataset", "Slug": "heart-attack-analysis-prediction-dataset", "Subtitle": "A dataset for heart attack classification", "Description": "## Hone your analytical and ML skills by participating in tasks of my other dataset's. Given below.\n\n\n[Data Science Job Posting on Glassdoor](https://www.kaggle.com/rashikrahmanpritom/data-science-job-posting-on-glassdoor)\n\n[Groceries dataset for Market Basket Analysis(MBA)](https://www.kaggle.com/rashikrahmanpritom/groceries-dataset-for-market-basket-analysismba)\n\n[Dataset for Facial recognition using ML approach](https://www.kaggle.com/rashikrahmanpritom/dataset-for-facial-recognition-using-ml-approach)\n\n[Covid_w/wo_Pneumonia Chest Xray](https://www.kaggle.com/rashikrahmanpritom/covid-wwo-pneumonia-chest-xray)\n\n[Disney Movies 1937-2016 Gross Income](https://www.kaggle.com/rashikrahmanpritom/disney-movies-19372016-total-gross)\n\n[Bollywood Movie data from 2000 to 2019](https://www.kaggle.com/rashikrahmanpritom/bollywood-movie-data-from-2000-to-2019)\n\n[17.7K English song data from 2008-2017](https://www.kaggle.com/rashikrahmanpritom/177k-english-song-data-from-20082017)\n\n## About this dataset\n\n- Age : Age of the patient\n\n- Sex : Sex of the patient\n\n- exang: exercise induced angina (1 = yes; 0 = no)\n\n- ca: number of major vessels (0-3)\n\n- cp : Chest Pain type chest pain type\n - Value 1: typical angina\n - Value 2: atypical angina\n - Value 3: non-anginal pain\n - Value 4: asymptomatic\n \n- trtbps : resting blood pressure (in mm Hg)\n- chol : cholestoral in mg/dl fetched via BMI sensor\n- fbs : (fasting blood sugar &gt; 120 mg/dl) (1 = true; 0 = false)\n- rest_ecg : resting electrocardiographic results\n - Value 0: normal\n - Value 1: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of &gt; 0.05 mV)\n - Value 2: showing probable or definite left ventricular hypertrophy by Estes' criteria\n \n- thalach : maximum heart rate achieved\n- target : 0= less chance of heart attack 1= more chance of heart attack\n\nn", "VersionNotes": "heart csv update", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
[{"Id": 1226038, "CreatorUserId": 4730101, "OwnerUserId": 4730101.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2047221.0, "CurrentDatasourceVersionId": 2087216.0, "ForumId": 1244179, "Type": 2, "CreationDate": "03/22/2021 08:19:12", "LastActivityDate": "03/22/2021", "TotalViews": 870835, "TotalDownloads": 138216, "TotalVotes": 3197, "TotalKernels": 1050}]
[{"Id": 4730101, "UserName": "rashikrahmanpritom", "DisplayName": "Rashik Rahman", "RegisterDate": "03/24/2020", "PerformanceTier": 3}]
[{"heart-attack-analysis-prediction-dataset/heart.csv": {"column_names": "[\"age\", \"sex\", \"cp\", \"trtbps\", \"chol\", \"fbs\", \"restecg\", \"thalachh\", \"exng\", \"oldpeak\", \"slp\", \"caa\", \"thall\", \"output\"]", "column_data_types": "{\"age\": \"int64\", \"sex\": \"int64\", \"cp\": \"int64\", \"trtbps\": \"int64\", \"chol\": \"int64\", \"fbs\": \"int64\", \"restecg\": \"int64\", \"thalachh\": \"int64\", \"exng\": \"int64\", \"oldpeak\": \"float64\", \"slp\": \"int64\", \"caa\": \"int64\", \"thall\": \"int64\", \"output\": \"int64\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 303 entries, 0 to 302\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 age 303 non-null int64 \n 1 sex 303 non-null int64 \n 2 cp 303 non-null int64 \n 3 trtbps 303 non-null int64 \n 4 chol 303 non-null int64 \n 5 fbs 303 non-null int64 \n 6 restecg 303 non-null int64 \n 7 thalachh 303 non-null int64 \n 8 exng 303 non-null int64 \n 9 oldpeak 303 non-null float64\n 10 slp 303 non-null int64 \n 11 caa 303 non-null int64 \n 12 thall 303 non-null int64 \n 13 output 303 non-null int64 \ndtypes: float64(1), int64(13)\nmemory usage: 33.3 KB\n", "summary": "{\"age\": {\"count\": 303.0, \"mean\": 54.366336633663366, \"std\": 9.082100989837857, \"min\": 29.0, \"25%\": 47.5, \"50%\": 55.0, \"75%\": 61.0, \"max\": 77.0}, \"sex\": {\"count\": 303.0, \"mean\": 0.6831683168316832, \"std\": 0.46601082333962385, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 1.0}, \"cp\": {\"count\": 303.0, \"mean\": 0.966996699669967, \"std\": 1.0320524894832985, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 2.0, \"max\": 3.0}, \"trtbps\": {\"count\": 303.0, \"mean\": 131.62376237623764, \"std\": 17.5381428135171, \"min\": 94.0, \"25%\": 120.0, \"50%\": 130.0, \"75%\": 140.0, \"max\": 200.0}, \"chol\": {\"count\": 303.0, \"mean\": 246.26402640264027, \"std\": 51.83075098793003, \"min\": 126.0, \"25%\": 211.0, \"50%\": 240.0, \"75%\": 274.5, \"max\": 564.0}, \"fbs\": {\"count\": 303.0, \"mean\": 0.1485148514851485, \"std\": 0.35619787492797644, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 0.0, \"max\": 1.0}, \"restecg\": {\"count\": 303.0, \"mean\": 0.528052805280528, \"std\": 0.525859596359298, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 2.0}, \"thalachh\": {\"count\": 303.0, \"mean\": 149.64686468646866, \"std\": 22.905161114914094, \"min\": 71.0, \"25%\": 133.5, \"50%\": 153.0, \"75%\": 166.0, \"max\": 202.0}, \"exng\": {\"count\": 303.0, \"mean\": 0.32673267326732675, \"std\": 0.4697944645223165, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 1.0, \"max\": 1.0}, \"oldpeak\": {\"count\": 303.0, \"mean\": 1.0396039603960396, \"std\": 1.1610750220686348, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.8, \"75%\": 1.6, \"max\": 6.2}, \"slp\": {\"count\": 303.0, \"mean\": 1.3993399339933994, \"std\": 0.6162261453459619, \"min\": 0.0, \"25%\": 1.0, \"50%\": 1.0, \"75%\": 2.0, \"max\": 2.0}, \"caa\": {\"count\": 303.0, \"mean\": 0.7293729372937293, \"std\": 1.022606364969327, \"min\": 0.0, \"25%\": 0.0, \"50%\": 0.0, \"75%\": 1.0, \"max\": 4.0}, \"thall\": {\"count\": 303.0, \"mean\": 2.3135313531353137, \"std\": 0.6122765072781409, \"min\": 0.0, \"25%\": 2.0, \"50%\": 2.0, \"75%\": 3.0, \"max\": 3.0}, \"output\": {\"count\": 303.0, \"mean\": 0.5445544554455446, \"std\": 0.4988347841643913, \"min\": 0.0, \"25%\": 0.0, \"50%\": 1.0, \"75%\": 1.0, \"max\": 1.0}}", "examples": 
"{\"age\":{\"0\":63,\"1\":37,\"2\":41,\"3\":56},\"sex\":{\"0\":1,\"1\":1,\"2\":0,\"3\":1},\"cp\":{\"0\":3,\"1\":2,\"2\":1,\"3\":1},\"trtbps\":{\"0\":145,\"1\":130,\"2\":130,\"3\":120},\"chol\":{\"0\":233,\"1\":250,\"2\":204,\"3\":236},\"fbs\":{\"0\":1,\"1\":0,\"2\":0,\"3\":0},\"restecg\":{\"0\":0,\"1\":1,\"2\":0,\"3\":1},\"thalachh\":{\"0\":150,\"1\":187,\"2\":172,\"3\":178},\"exng\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0},\"oldpeak\":{\"0\":2.3,\"1\":3.5,\"2\":1.4,\"3\":0.8},\"slp\":{\"0\":0,\"1\":0,\"2\":2,\"3\":2},\"caa\":{\"0\":0,\"1\":0,\"2\":0,\"3\":0},\"thall\":{\"0\":1,\"1\":2,\"2\":2,\"3\":2},\"output\":{\"0\":1,\"1\":1,\"2\":1,\"3\":1}}"}}]
true
1
<start_data_description><data_path>heart-attack-analysis-prediction-dataset/heart.csv: <column_names> ['age', 'sex', 'cp', 'trtbps', 'chol', 'fbs', 'restecg', 'thalachh', 'exng', 'oldpeak', 'slp', 'caa', 'thall', 'output'] <column_types> {'age': 'int64', 'sex': 'int64', 'cp': 'int64', 'trtbps': 'int64', 'chol': 'int64', 'fbs': 'int64', 'restecg': 'int64', 'thalachh': 'int64', 'exng': 'int64', 'oldpeak': 'float64', 'slp': 'int64', 'caa': 'int64', 'thall': 'int64', 'output': 'int64'} <dataframe_Summary> {'age': {'count': 303.0, 'mean': 54.366336633663366, 'std': 9.082100989837857, 'min': 29.0, '25%': 47.5, '50%': 55.0, '75%': 61.0, 'max': 77.0}, 'sex': {'count': 303.0, 'mean': 0.6831683168316832, 'std': 0.46601082333962385, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 1.0}, 'cp': {'count': 303.0, 'mean': 0.966996699669967, 'std': 1.0320524894832985, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 2.0, 'max': 3.0}, 'trtbps': {'count': 303.0, 'mean': 131.62376237623764, 'std': 17.5381428135171, 'min': 94.0, '25%': 120.0, '50%': 130.0, '75%': 140.0, 'max': 200.0}, 'chol': {'count': 303.0, 'mean': 246.26402640264027, 'std': 51.83075098793003, 'min': 126.0, '25%': 211.0, '50%': 240.0, '75%': 274.5, 'max': 564.0}, 'fbs': {'count': 303.0, 'mean': 0.1485148514851485, 'std': 0.35619787492797644, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 0.0, 'max': 1.0}, 'restecg': {'count': 303.0, 'mean': 0.528052805280528, 'std': 0.525859596359298, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 2.0}, 'thalachh': {'count': 303.0, 'mean': 149.64686468646866, 'std': 22.905161114914094, 'min': 71.0, '25%': 133.5, '50%': 153.0, '75%': 166.0, 'max': 202.0}, 'exng': {'count': 303.0, 'mean': 0.32673267326732675, 'std': 0.4697944645223165, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 1.0}, 'oldpeak': {'count': 303.0, 'mean': 1.0396039603960396, 'std': 1.1610750220686348, 'min': 0.0, '25%': 0.0, '50%': 0.8, '75%': 1.6, 'max': 6.2}, 'slp': {'count': 303.0, 'mean': 1.3993399339933994, 'std': 0.6162261453459619, 'min': 0.0, '25%': 1.0, '50%': 1.0, '75%': 2.0, 'max': 2.0}, 'caa': {'count': 303.0, 'mean': 0.7293729372937293, 'std': 1.022606364969327, 'min': 0.0, '25%': 0.0, '50%': 0.0, '75%': 1.0, 'max': 4.0}, 'thall': {'count': 303.0, 'mean': 2.3135313531353137, 'std': 0.6122765072781409, 'min': 0.0, '25%': 2.0, '50%': 2.0, '75%': 3.0, 'max': 3.0}, 'output': {'count': 303.0, 'mean': 0.5445544554455446, 'std': 0.4988347841643913, 'min': 0.0, '25%': 0.0, '50%': 1.0, '75%': 1.0, 'max': 1.0}} <dataframe_info> RangeIndex: 303 entries, 0 to 302 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 303 non-null int64 1 sex 303 non-null int64 2 cp 303 non-null int64 3 trtbps 303 non-null int64 4 chol 303 non-null int64 5 fbs 303 non-null int64 6 restecg 303 non-null int64 7 thalachh 303 non-null int64 8 exng 303 non-null int64 9 oldpeak 303 non-null float64 10 slp 303 non-null int64 11 caa 303 non-null int64 12 thall 303 non-null int64 13 output 303 non-null int64 dtypes: float64(1), int64(13) memory usage: 33.3 KB <some_examples> {'age': {'0': 63, '1': 37, '2': 41, '3': 56}, 'sex': {'0': 1, '1': 1, '2': 0, '3': 1}, 'cp': {'0': 3, '1': 2, '2': 1, '3': 1}, 'trtbps': {'0': 145, '1': 130, '2': 130, '3': 120}, 'chol': {'0': 233, '1': 250, '2': 204, '3': 236}, 'fbs': {'0': 1, '1': 0, '2': 0, '3': 0}, 'restecg': {'0': 0, '1': 1, '2': 0, '3': 1}, 'thalachh': {'0': 150, '1': 187, '2': 172, '3': 178}, 'exng': {'0': 0, '1': 0, '2': 0, '3': 0}, 'oldpeak': {'0': 2.3, '1': 3.5, 
'2': 1.4, '3': 0.8}, 'slp': {'0': 0, '1': 0, '2': 2, '3': 2}, 'caa': {'0': 0, '1': 0, '2': 0, '3': 0}, 'thall': {'0': 1, '1': 2, '2': 2, '3': 2}, 'output': {'0': 1, '1': 1, '2': 1, '3': 1}} <end_description>
808
1
2,503
808
129730665
<jupyter_start><jupyter_text>Twitter Sentiment Analysis

# Twitter Sentiment Analysis Dataset

## Overview
This is an entity-level sentiment analysis dataset of twitter. Given a message and an entity, the task is to judge the sentiment of the message about the entity. There are three classes in this dataset: Positive, Negative and Neutral. We regard messages that are not relevant to the entity (i.e. Irrelevant) as Neutral.

## Usage
Please use `twitter_training.csv` as the training set and `twitter_validation.csv` as the validation set. Top 1 classification accuracy is used as the metric.

Kaggle dataset identifier: twitter-entity-sentiment-analysis
<jupyter_code>import pandas as pd
df = pd.read_csv('twitter-entity-sentiment-analysis/twitter_validation.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 999 entries, 0 to 998
Data columns (total 4 columns):
 #   Column      Non-Null Count  Dtype
---  ------      --------------  -----
 0   3364        999 non-null    int64
 1   Facebook    999 non-null    object
 2   Irrelevant  999 non-null    object
 3   I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom’s great auntie as ‘Hayley can’t get out of bed’ and told to his grandma, who now thinks I’m a lazy, terrible person 🤣  999 non-null  object
dtypes: int64(1), object(3)
memory usage: 31.3+ KB
<jupyter_text>Examples:
{ "3364": 352, "Facebook": "Amazon", "Irrelevant": "Neutral", "I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom\u2019s great auntie as \u2018Hayley can\u2019t get out of bed\u2019 and told to his grandma, who now thinks I\u2019m a lazy, terrible person \ud83e\udd23": "BBC News - Am...(truncated)", }
{ "3364": 8312, "Facebook": "Microsoft", "Irrelevant": "Negative", "I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom\u2019s great auntie as \u2018Hayley can\u2019t get out of bed\u2019 and told to his grandma, who now thinks I\u2019m a lazy, terrible person \ud83e\udd23": "@Microsoft Wh...(truncated)", }
{ "3364": 4371, "Facebook": "CS-GO", "Irrelevant": "Negative", "I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom\u2019s great auntie as \u2018Hayley can\u2019t get out of bed\u2019 and told to his grandma, who now thinks I\u2019m a lazy, terrible person \ud83e\udd23": "CSGO matchmak...(truncated)", }
{ "3364": 4433, "Facebook": "Google", "Irrelevant": "Neutral", "I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom\u2019s great auntie as \u2018Hayley can\u2019t get out of bed\u2019 and told to his grandma, who now thinks I\u2019m a lazy, terrible person \ud83e\udd23": "Now the Presi...(truncated)", }
<jupyter_code>import pandas as pd
df = pd.read_csv('twitter-entity-sentiment-analysis/twitter_training.csv')
df.info()
<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 74681 entries, 0 to 74680
Data columns (total 4 columns):
 #   Column                                                  Non-Null Count  Dtype
---  ------                                                  --------------  -----
 0   2401                                                    74681 non-null  int64
 1   Borderlands                                             74681 non-null  object
 2   Positive                                                74681 non-null  object
 3   im getting on borderlands and i will murder you all ,   73995 non-null  object
dtypes: int64(1), object(3)
memory usage: 2.3+ MB
<jupyter_text>Examples:
{ "2401": 2401, "Borderlands": "Borderlands", "Positive": "Positive", "im getting on borderlands and i will murder you all ,": "I am coming to the borders and I will kill you all," }
{ "2401": 2401, "Borderlands": "Borderlands", "Positive": "Positive", "im getting on borderlands and i will murder you all ,": "im getting on borderlands and i will kill you all," }
{ "2401": 2401, "Borderlands": "Borderlands", "Positive": "Positive", "im getting on borderlands and i will murder you all ,": "im coming on borderlands and i will murder you all," }
{ "2401": 2401, "Borderlands": "Borderlands", "Positive": "Positive", "im getting on borderlands and i will murder you all ,": "im getting on borderlands 2 and i will murder you me all," }
<jupyter_script># # Practical 4
# **Name : Alok Sinh R Chudasama**
# **Roll No : 21BCE501**
# **Subject : NLP**

import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)

# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os

for dirname, _, filenames in os.walk("/kaggle/input"):
    for filename in filenames:
        print(os.path.join(dirname, filename))

# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session

import numpy as np  # linear algebra
import pandas as pd  # data processing

pd.options.mode.chained_assignment = None
import os  # File location

for dirname, _, filenames in os.walk("/kaggle/input"):
    for filename in filenames:
        print(os.path.join(dirname, filename))

from wordcloud import WordCloud  # Word visualization
import matplotlib.pyplot as plt  # Plotting properties
import seaborn as sns  # Plotting properties
from sklearn.feature_extraction.text import CountVectorizer  # Data transformation
from sklearn.model_selection import train_test_split  # Data testing
from sklearn.linear_model import LogisticRegression  # Prediction Model
from sklearn.metrics import accuracy_score  # Comparison between real and predicted
from xgboost import XGBClassifier
from sklearn.preprocessing import (
    LabelEncoder,
)  # Variable encoding and decoding for XGBoost
import re  # Regular expressions
import nltk
from nltk import word_tokenize

nltk.download("stopwords")
nltk.download("punkt")  # word_tokenize needs the punkt tokenizer models

# Validation dataset
val = pd.read_csv(
    "/kaggle/input/twitter-entity-sentiment-analysis/twitter_validation.csv",
    header=None,
)
# Full dataset for Train-Test
train = pd.read_csv(
    "/kaggle/input/twitter-entity-sentiment-analysis/twitter_training.csv", header=None
)

train.columns = ["id", "information", "type", "text"]
train.head()

val.columns = ["id", "information", "type", "text"]
val.head()

train_data = train
train_data

val_data = val
val_data

# Text transformation
train_data["lower"] = train_data.text.str.lower()  # lowercase
train_data["lower"] = [
    str(data) for data in train_data.lower
]  # converting all to string
train_data["lower"] = train_data.lower.apply(
    lambda x: re.sub("[^A-Za-z0-9 ]+", " ", x)
)  # regex
val_data["lower"] = val_data.text.str.lower()  # lowercase
val_data["lower"] = [str(data) for data in val_data.lower]  # converting all to string
val_data["lower"] = val_data.lower.apply(
    lambda x: re.sub("[^A-Za-z0-9 ]+", " ", x)
)  # regex

train_data.head()

# Join with a space so that words from different tweets don't run together
word_cloud_text = " ".join(train_data[train_data["type"] == "Positive"].lower)
# Creation of wordcloud
wordcloud = WordCloud(
    max_font_size=100,
    max_words=100,
    background_color="black",
    scale=10,
    width=800,
    height=800,
).generate(word_cloud_text)
# Figure properties
plt.figure(figsize=(10, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()

word_cloud_text = " ".join(train_data[train_data["type"] == "Negative"].lower)
# Creation of wordcloud
wordcloud = WordCloud(
    max_font_size=100,
    max_words=100,
    background_color="black",
    scale=10,
    width=800,
    height=800,
).generate(word_cloud_text)
# Figure properties
plt.figure(figsize=(10, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()

word_cloud_text = " ".join(train_data[train_data["type"] == "Irrelevant"].lower)
# Creation of wordcloud
wordcloud = WordCloud(
    max_font_size=100,
    max_words=100,
    background_color="black",
    scale=10,
    width=800,
    height=800,
).generate(word_cloud_text)
# Figure properties
plt.figure(figsize=(10, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()

word_cloud_text = " ".join(train_data[train_data["type"] == "Neutral"].lower)
# Creation of wordcloud
wordcloud = WordCloud(
    max_font_size=100,
    max_words=100,
    background_color="black",
    scale=10,
    width=800,
    height=800,
).generate(word_cloud_text)
# Figure properties
plt.figure(figsize=(10, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()

# Count information per category
plot1 = train.groupby(by=["information", "type"]).count().reset_index()
plot1.head()

# Figure of comparison per branch
plt.figure(figsize=(20, 6))
sns.barplot(data=plot1, x="information", y="id", hue="type")
plt.xticks(rotation=90)
plt.xlabel("Brand")
plt.ylabel("Number of tweets")
plt.grid()
plt.title("Distribution of tweets per Branch and Type")

# Text splitting
tokens_text = [word_tokenize(str(word)) for word in train_data.lower]
# Unique word counter
tokens_counter = [item for sublist in tokens_text for item in sublist]
print("Number of tokens: ", len(set(tokens_counter)))
tokens_text[1]

# Choosing English stopwords
stopwords_nltk = nltk.corpus.stopwords
stop_words = stopwords_nltk.words("english")
stop_words[:5]

# Initial Bag of Words
bow_counts = CountVectorizer(
    tokenizer=word_tokenize,
    stop_words=stop_words,  # English stopwords
    ngram_range=(1, 1),  # analysis of one word
)

# Train - Test splitting
reviews_train, reviews_test = train_test_split(
    train_data, test_size=0.2, random_state=0
)

# Creation of encoding related to train dataset
X_train_bow = bow_counts.fit_transform(reviews_train.lower)
# Transformation of test dataset with train encoding
X_test_bow = bow_counts.transform(reviews_test.lower)
X_test_bow

# Labels for train and test encoding
y_train_bow = reviews_train["type"]
y_test_bow = reviews_test["type"]

# Total of registers per category
y_test_bow.value_counts() / y_test_bow.shape[0]

# Logistic regression
model1 = LogisticRegression(C=1, solver="liblinear", max_iter=200)
model1.fit(X_train_bow, y_train_bow)
# Prediction
test_pred = model1.predict(X_test_bow)
print("Accuracy: ", accuracy_score(y_test_bow, test_pred) * 100)

# Validation data
X_val_bow = bow_counts.transform(val_data.lower)
y_val_bow = val_data["type"]

Val_res = model1.predict(X_val_bow)
print("Accuracy: ", accuracy_score(y_val_bow, Val_res) * 100)

# n-grams of one to four words
bow_counts = CountVectorizer(tokenizer=word_tokenize, ngram_range=(1, 4))
# Data labeling
X_train_bow = bow_counts.fit_transform(reviews_train.lower)
X_test_bow = bow_counts.transform(reviews_test.lower)
X_val_bow = bow_counts.transform(val_data.lower)
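The script stops after building the (1, 4)-gram features without fitting a second model on them. A minimal sketch completing that experiment with the same pattern as model1 (not in the original; the hyperparameters are illustrative, and the X_*_bow / y_*_bow variables are assumed from the script above):

model2 = LogisticRegression(C=1, solver="liblinear", max_iter=1000)
model2.fit(X_train_bow, y_train_bow)

test_pred2 = model2.predict(X_test_bow)
print("Test accuracy: ", accuracy_score(y_test_bow, test_pred2) * 100)

val_pred2 = model2.predict(X_val_bow)
print("Validation accuracy: ", accuracy_score(y_val_bow, val_pred2) * 100)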
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/730/129730665.ipynb
twitter-entity-sentiment-analysis
jp797498e
[{"Id": 129730665, "ScriptId": 36051553, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13404689, "CreationDate": "05/16/2023 04:54:47", "VersionNumber": 1.0, "Title": "NLP 4", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 211.0, "LinesInsertedFromPrevious": 211.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
[{"Id": 186074719, "KernelVersionId": 129730665, "SourceDatasetVersionId": 2510329}]
[{"Id": 2510329, "DatasetId": 1520310, "DatasourceVersionId": 2553073, "CreatorUserId": 8093471, "LicenseName": "CC0: Public Domain", "CreationDate": "08/09/2021 02:52:11", "VersionNumber": 2.0, "Title": "Twitter Sentiment Analysis", "Slug": "twitter-entity-sentiment-analysis", "Subtitle": "Entity-level sentiment analysis on multi-lingual tweets.", "Description": "# Twitter Sentiment Analysis Dataset\n\n## Overview\n\nThis is an entity-level sentiment analysis dataset of twitter. Given a message and an entity, the task is to judge the sentiment of the message about the entity. There are three classes in this dataset: Positive, Negative and Neutral. We regard messages that are not relevant to the entity (i.e. Irrelevant) as Neutral.\n\n## Usage\n\nPlease use `twitter_training.csv` as the training set and `twitter_validation.csv` as the validation set. Top 1 classification accuracy is used as the metric.", "VersionNotes": "Data Update 2021/08/09", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
[{"Id": 1520310, "CreatorUserId": 8093471, "OwnerUserId": 8093471.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2510329.0, "CurrentDatasourceVersionId": 2553073.0, "ForumId": 1540112, "Type": 2, "CreationDate": "08/09/2021 02:05:48", "LastActivityDate": "08/09/2021", "TotalViews": 120881, "TotalDownloads": 18504, "TotalVotes": 145, "TotalKernels": 93}]
[{"Id": 8093471, "UserName": "jp797498e", "DisplayName": "passionate-nlp", "RegisterDate": "08/09/2021", "PerformanceTier": 0}]
[{"twitter-entity-sentiment-analysis/twitter_validation.csv": {"column_names": "[\"3364\", \"Facebook\", \"Irrelevant\", \"I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom\\u2019s great auntie as \\u2018Hayley can\\u2019t get out of bed\\u2019 and told to his grandma, who now thinks I\\u2019m a lazy, terrible person \\ud83e\\udd23\"]", "column_data_types": "{\"3364\": \"int64\", \"Facebook\": \"object\", \"Irrelevant\": \"object\", \"I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom\\u2019s great auntie as \\u2018Hayley can\\u2019t get out of bed\\u2019 and told to his grandma, who now thinks I\\u2019m a lazy, terrible person \\ud83e\\udd23\": \"object\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 999 entries, 0 to 998\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 3364 999 non-null int64 \n 1 Facebook 999 non-null object\n 2 Irrelevant 999 non-null object\n 3 I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom\u2019s great auntie as \u2018Hayley can\u2019t get out of bed\u2019 and told to his grandma, who now thinks I\u2019m a lazy, terrible person \ud83e\udd23 999 non-null object\ndtypes: int64(1), object(3)\nmemory usage: 31.3+ KB\n", "summary": "{\"3364\": {\"count\": 999.0, \"mean\": 6435.15915915916, \"std\": 3728.9122259005558, \"min\": 6.0, \"25%\": 3241.5, \"50%\": 6560.0, \"75%\": 9662.5, \"max\": 13197.0}}", "examples": "{\"3364\":{\"0\":352,\"1\":8312,\"2\":4371,\"3\":4433},\"Facebook\":{\"0\":\"Amazon\",\"1\":\"Microsoft\",\"2\":\"CS-GO\",\"3\":\"Google\"},\"Irrelevant\":{\"0\":\"Neutral\",\"1\":\"Negative\",\"2\":\"Negative\",\"3\":\"Neutral\"},\"I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom\\u2019s great auntie as \\u2018Hayley can\\u2019t get out of bed\\u2019 and told to his grandma, who now thinks I\\u2019m a lazy, terrible person \\ud83e\\udd23\":{\"0\":\"BBC News - Amazon boss Jeff Bezos rejects claims company acted like a 'drug dealer' bbc.co.uk\\/news\\/av\\/busine\\u2026\",\"1\":\"@Microsoft Why do I pay for WORD when it functions so poorly on my @SamsungUS Chromebook? \\ud83d\\ude44\",\"2\":\"CSGO matchmaking is so full of closet hacking, it's a truly awful game.\",\"3\":\"Now the President is slapping Americans in the face that he really did commit an unlawful act after his acquittal! 
From Discover on Google vanityfair.com\\/news\\/2020\\/02\\/t\\u2026\"}}"}}, {"twitter-entity-sentiment-analysis/twitter_training.csv": {"column_names": "[\"2401\", \"Borderlands\", \"Positive\", \"im getting on borderlands and i will murder you all ,\"]", "column_data_types": "{\"2401\": \"int64\", \"Borderlands\": \"object\", \"Positive\": \"object\", \"im getting on borderlands and i will murder you all ,\": \"object\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 74681 entries, 0 to 74680\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 2401 74681 non-null int64 \n 1 Borderlands 74681 non-null object\n 2 Positive 74681 non-null object\n 3 im getting on borderlands and i will murder you all , 73995 non-null object\ndtypes: int64(1), object(3)\nmemory usage: 2.3+ MB\n", "summary": "{\"2401\": {\"count\": 74681.0, \"mean\": 6432.6401494356, \"std\": 3740.423819299502, \"min\": 1.0, \"25%\": 3195.0, \"50%\": 6422.0, \"75%\": 9601.0, \"max\": 13200.0}}", "examples": "{\"2401\":{\"0\":2401,\"1\":2401,\"2\":2401,\"3\":2401},\"Borderlands\":{\"0\":\"Borderlands\",\"1\":\"Borderlands\",\"2\":\"Borderlands\",\"3\":\"Borderlands\"},\"Positive\":{\"0\":\"Positive\",\"1\":\"Positive\",\"2\":\"Positive\",\"3\":\"Positive\"},\"im getting on borderlands and i will murder you all ,\":{\"0\":\"I am coming to the borders and I will kill you all,\",\"1\":\"im getting on borderlands and i will kill you all,\",\"2\":\"im coming on borderlands and i will murder you all,\",\"3\":\"im getting on borderlands 2 and i will murder you me all,\"}}"}}]
true
2
<start_data_description><data_path>twitter-entity-sentiment-analysis/twitter_validation.csv: <column_names> ['3364', 'Facebook', 'Irrelevant', 'I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom’s great auntie as ‘Hayley can’t get out of bed’ and told to his grandma, who now thinks I’m a lazy, terrible person 🤣'] <column_types> {'3364': 'int64', 'Facebook': 'object', 'Irrelevant': 'object', 'I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom’s great auntie as ‘Hayley can’t get out of bed’ and told to his grandma, who now thinks I’m a lazy, terrible person 🤣': 'object'} <dataframe_Summary> {'3364': {'count': 999.0, 'mean': 6435.15915915916, 'std': 3728.9122259005558, 'min': 6.0, '25%': 3241.5, '50%': 6560.0, '75%': 9662.5, 'max': 13197.0}} <dataframe_info> RangeIndex: 999 entries, 0 to 998 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 3364 999 non-null int64 1 Facebook 999 non-null object 2 Irrelevant 999 non-null object 3 I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom’s great auntie as ‘Hayley can’t get out of bed’ and told to his grandma, who now thinks I’m a lazy, terrible person 🤣 999 non-null object dtypes: int64(1), object(3) memory usage: 31.3+ KB <some_examples> {'3364': {'0': 352, '1': 8312, '2': 4371, '3': 4433}, 'Facebook': {'0': 'Amazon', '1': 'Microsoft', '2': 'CS-GO', '3': 'Google'}, 'Irrelevant': {'0': 'Neutral', '1': 'Negative', '2': 'Negative', '3': 'Neutral'}, 'I mentioned on Facebook that I was struggling for motivation to go for a run the other day, which has been translated by Tom’s great auntie as ‘Hayley can’t get out of bed’ and told to his grandma, who now thinks I’m a lazy, terrible person 🤣': {'0': "BBC News - Amazon boss Jeff Bezos rejects claims company acted like a 'drug dealer' bbc.co.uk/news/av/busine…", '1': '@Microsoft Why do I pay for WORD when it functions so poorly on my @SamsungUS Chromebook? 🙄', '2': "CSGO matchmaking is so full of closet hacking, it's a truly awful game.", '3': 'Now the President is slapping Americans in the face that he really did commit an unlawful act after his acquittal! 
From Discover on Google vanityfair.com/news/2020/02/t…'}} <end_description> <start_data_description><data_path>twitter-entity-sentiment-analysis/twitter_training.csv: <column_names> ['2401', 'Borderlands', 'Positive', 'im getting on borderlands and i will murder you all ,'] <column_types> {'2401': 'int64', 'Borderlands': 'object', 'Positive': 'object', 'im getting on borderlands and i will murder you all ,': 'object'} <dataframe_Summary> {'2401': {'count': 74681.0, 'mean': 6432.6401494356, 'std': 3740.423819299502, 'min': 1.0, '25%': 3195.0, '50%': 6422.0, '75%': 9601.0, 'max': 13200.0}} <dataframe_info> RangeIndex: 74681 entries, 0 to 74680 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 2401 74681 non-null int64 1 Borderlands 74681 non-null object 2 Positive 74681 non-null object 3 im getting on borderlands and i will murder you all , 73995 non-null object dtypes: int64(1), object(3) memory usage: 2.3+ MB <some_examples> {'2401': {'0': 2401, '1': 2401, '2': 2401, '3': 2401}, 'Borderlands': {'0': 'Borderlands', '1': 'Borderlands', '2': 'Borderlands', '3': 'Borderlands'}, 'Positive': {'0': 'Positive', '1': 'Positive', '2': 'Positive', '3': 'Positive'}, 'im getting on borderlands and i will murder you all ,': {'0': 'I am coming to the borders and I will kill you all,', '1': 'im getting on borderlands and i will kill you all,', '2': 'im coming on borderlands and i will murder you all,', '3': 'im getting on borderlands 2 and i will murder you me all,'}} <end_description>
2,099
0
3,548
2,099
129730696
import numpy as np
import pandas as pd
import os

for dirname, _, filenames in os.walk("/kaggle/input"):
    for filename in filenames:
        print(os.path.join(dirname, filename))

import nltk
from gensim.models import Word2Vec
from nltk.corpus import stopwords
import re
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import matplotlib.pyplot as plt

# ## Self Implementation
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import matplotlib.pyplot as plt

dtype = torch.FloatTensor

# 3-word sentences (to simplify)
# Together they form our text corpus
sentences = [
    "i like dog",
    "i like cat",
    "i like animal",
    "dog cat animal",
    "apple cat dog like",
    "dog fish milk like",
    "dog cat eyes like",
    "i like apple",
    "apple i hate",
    "apple i movie",
    "book music like",
    "cat dog hate",
    "cat dog like",
]

# list all the words present in our corpus
word_sequence = " ".join(sentences).split()
# print(word_sequence)

# build the vocabulary
word_list = list(set(word_sequence))
# print(word_list)
word_dict = {w: i for i, w in enumerate(word_list)}
# print(word_dict)

# Word2Vec parameters
batch_size = 20
embedding_size = 2  # 2 dims so the embeddings can be plotted directly
voc_size = len(word_list)

# input word
j = 1
print("Input word : ")
print(word_sequence[j], word_dict[word_sequence[j]])
# context words
print("Context words : ")
print(word_sequence[j - 1], word_sequence[j + 1])
print([word_dict[word_sequence[j - 1]], word_dict[word_sequence[j + 1]]])

skip_grams = []
for i in range(1, len(word_sequence) - 1):
    input = word_dict[word_sequence[i]]
    context = [word_dict[word_sequence[i - 1]], word_dict[word_sequence[i + 1]]]
    for w in context:
        skip_grams.append([input, w])

# let's inspect some (input, context) pairs
skip_grams[:6]

np.random.seed(172)


def random_batch(data, size):
    random_inputs = []
    random_labels = []
    random_index = np.random.choice(range(len(data)), size, replace=False)
    for i in random_index:
        # one-hot encoding of words
        random_inputs.append(np.eye(voc_size)[data[i][0]])  # input
        random_labels.append(data[i][1])  # context word
    return random_inputs, random_labels


random_batch(skip_grams[:6], size=3)


# Model
class Word2Vec(nn.Module):
    def __init__(self):
        super(Word2Vec, self).__init__()
        # parameters between -1 and +1
        self.W = nn.Parameter(-2 * torch.rand(voc_size, embedding_size) + 1).type(
            dtype
        )  # voc_size -> embedding_size Weight
        self.V = nn.Parameter(-2 * torch.rand(embedding_size, voc_size) + 1).type(
            dtype
        )  # embedding_size -> voc_size Weight

    def forward(self, X):
        hidden_layer = torch.matmul(
            X, self.W
        )  # hidden_layer : [batch_size, embedding_size]
        output_layer = torch.matmul(
            hidden_layer, self.V
        )  # output_layer : [batch_size, voc_size]
        return output_layer


model = Word2Vec()
# Set the model in train mode
model.train()

criterion = (
    nn.CrossEntropyLoss()
)  # Softmax (for multi-class classification problems) is already included
optimizer = optim.Adam(model.parameters(), lr=0.001)

for epoch in range(5000):
    input_batch, target_batch = random_batch(skip_grams, batch_size)
    input_batch = torch.Tensor(input_batch)
    target_batch = torch.LongTensor(target_batch)

    optimizer.zero_grad()
    output = model(input_batch)

    # output : [batch_size, voc_size], target_batch : [batch_size] (LongTensor, not one-hot)
    loss = criterion(output, target_batch)
    if (epoch + 1) % 1000 == 0:
        print("Epoch:", "%04d" % (epoch + 1), "cost =", "{:.6f}".format(loss))

    loss.backward()
    optimizer.step()

W, _ = model.parameters()
print(W.detach())

for i, word in enumerate(word_list):
    W, _ = model.parameters()
    W = W.detach()
    x, y = float(W[i][0]), float(W[i][1])
    plt.scatter(x, y)
    plt.annotate(
        word,
        xy=(x, y),
        xytext=(5, 2),
        textcoords="offset points",
        ha="right",
        va="bottom",
    )
plt.show()

# ## Using Gensim Library
import nltk
from gensim.models import Word2Vec
from nltk.corpus import stopwords
import re

paragraph = """I have three visions for India. In 3000 years of our history, people from all over
the world have come and invaded us, captured our lands, conquered our minds.
From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,
the French, the Dutch, all of them came and looted us, took over what was ours.
Yet we have not done this to any other nation. We have not conquered anyone.
We have not grabbed their land, their culture,
their history and tried to enforce our way of life on them.
Why? Because we respect the freedom of others.That is why my
first vision is that of freedom. I believe that India got its first vision of
this in 1857, when we started the War of Independence. It is this freedom that
we must protect and nurture and build on. If we are not free, no one will respect us.
My second vision for India’s development. For fifty years we have been a developing nation.
It is time we see ourselves as a developed nation. We are among the top 5 nations of the world
in terms of GDP. We have a 10 percent growth rate in most areas. Our poverty levels are falling.
Our achievements are being globally recognised today. Yet we lack the self-confidence to
see ourselves as a developed nation, self-reliant and self-assured. Isn’t this incorrect?
I have a third vision. India must stand up to the world. Because I believe that unless India
stands up to the world, no one will respect us. Only strength respects strength. We must be
strong not only as a military power but also as an economic power. Both must go hand-in-hand.
My good fortune was to have worked with three great minds. Dr. Vikram Sarabhai of the Dept. of
space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material.
I was lucky to have worked with all three of them closely and consider this the great opportunity of my life.
I see four milestones in my career"""

nltk.download("punkt")  # sent_tokenize / word_tokenize need the punkt models

# Preprocessing the data
text = re.sub(r"\[[0-9]*\]", " ", paragraph)
text = re.sub(r"\s+", " ", text)
text = text.lower()
text = re.sub(r"\d", " ", text)
text = re.sub(r"\s+", " ", text)

# Preparing the dataset
sentences = nltk.sent_tokenize(text)
sentences = [nltk.word_tokenize(sentence) for sentence in sentences]

for i in range(len(sentences)):
    sentences[i] = [
        word for word in sentences[i] if word not in stopwords.words("english")
    ]

# Training the Word2Vec model
model = Word2Vec(sentences, min_count=1)
# words = model.wv.vocab  # removed in gensim 4.x; see the sketch below

# Finding Word Vectors
vector = model.wv["visions"]
print(vector)

# Most similar words
similar = model.wv.most_similar("vikram")
print(similar)
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/730/129730696.ipynb
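A natural follow-up to the self-implemented Word2Vec notebook above is querying the learned embedding matrix for nearest neighbours. The sketch below is not part of the original notebook; it assumes the trained `model`, plus `word_list` and `word_dict` from that notebook, are still in scope, and `most_similar` is a hypothetical helper name.

import torch.nn.functional as F

# Learned input embeddings: one row per vocabulary word.
W, _ = model.parameters()
emb = W.detach()  # shape [voc_size, embedding_size]

def most_similar(word, topn=3):
    # cosine similarity between the query embedding and every row of emb
    query = emb[word_dict[word]].unsqueeze(0)      # [1, embedding_size]
    sims = F.cosine_similarity(query, emb, dim=1)  # [voc_size]
    order = sims.argsort(descending=True)
    # skip index 0 of the ranking, which is the query word itself
    return [(word_list[int(i)], float(sims[int(i)])) for i in order[1 : topn + 1]]

print(most_similar("dog"))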
null
null
[{"Id": 129730696, "ScriptId": 38028418, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13404689, "CreationDate": "05/16/2023 04:55:15", "VersionNumber": 1.0, "Title": "NLP 8", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 203.0, "LinesInsertedFromPrevious": 203.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
import numpy as np import pandas as pd import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) import nltk from gensim.models import Word2Vec from nltk.corpus import stopwords import re import torch import torch.nn as nn import torch.optim as optim from torch.autograd import Variable import matplotlib.pyplot as plt # ## Self Implementation import numpy as np import torch import torch.nn as nn import torch.optim as optim from torch.autograd import Variable import matplotlib.pyplot as plt dtype = torch.FloatTensor # 3-word sentences (to simplify) # All of them form our text corpus sentences = [ "i like dog", "i like cat", "i like animal", "dog cat animal", "apple cat dog like", "dog fish milk like", "dog cat eyes like", "i like apple", "apple i hate", "apple i movie", "book music like", "cat dog hate", "cat dog like", ] # list all the words present in our corpus word_sequence = " ".join(sentences).split() # print(word_sequence ) # build the vocabulary word_list = list(set(word_sequence)) # print(word_list) word_dict = {w: i for i, w in enumerate(word_list)} # print(word_dict) # Word2Vec parameters batch_size = 20 # mini-batch size embedding_size = 2 # To show 2 dim embedding graph voc_size = len(word_list) # input word j = 1 print("Input word : ") print(word_sequence[j], word_dict[word_sequence[j]]) # context words print("Context words : ") print(word_sequence[j - 1], word_sequence[j + 1]) print([word_dict[word_sequence[j - 1]], word_dict[word_sequence[j + 1]]]) skip_grams = [] for i in range(1, len(word_sequence) - 1): center = word_dict[word_sequence[i]] context = [word_dict[word_sequence[i - 1]], word_dict[word_sequence[i + 1]]] for w in context: skip_grams.append([center, w]) # let's look at a few (center, context) pairs skip_grams[:6] np.random.seed(172) def random_batch(data, size): random_inputs = [] random_labels = [] random_index = np.random.choice(range(len(data)), size, replace=False) for i in random_index: # one-hot encoding of words random_inputs.append(np.eye(voc_size)[data[i][0]]) # input (center word) random_labels.append(data[i][1]) # context word return random_inputs, random_labels random_batch(skip_grams[:6], size=3) # Model class Word2Vec(nn.Module): def __init__(self): super(Word2Vec, self).__init__() # parameters between -1 and + 1 self.W = nn.Parameter(-2 * torch.rand(voc_size, embedding_size) + 1).type( dtype ) # voc_size -> embedding_size Weight self.V = nn.Parameter(-2 * torch.rand(embedding_size, voc_size) + 1).type( dtype ) # embedding_size -> voc_size Weight def forward(self, X): hidden_layer = torch.matmul( X, self.W ) # hidden_layer : [batch_size, embedding_size] output_layer = torch.matmul( hidden_layer, self.V ) # output_layer : [batch_size, voc_size] # return output_layer return output_layer model = Word2Vec() # Set the model in train mode model.train() criterion = ( nn.CrossEntropyLoss() ) # Softmax (for multi-class classification problems) is already included optimizer = optim.Adam(model.parameters(), lr=0.001) for epoch in range(5000): input_batch, target_batch = random_batch(skip_grams, batch_size) # new_tensor(data, dtype=None, device=None, requires_grad=False) input_batch = torch.Tensor(input_batch) target_batch = torch.LongTensor(target_batch) optimizer.zero_grad() output = model(input_batch) # output : [batch_size, voc_size], target_batch : [batch_size] (LongTensor, not one-hot) loss = criterion(output, target_batch) if (epoch + 1) % 1000 == 0: print("Epoch:", "%04d" % (epoch + 1), "cost =", 
"{:.6f}".format(loss)) loss.backward() optimizer.step() W, _ = model.parameters() print(W.detach()) for i, word in enumerate(word_list): W, _ = model.parameters() W = W.detach() x, y = float(W[i][0]), float(W[i][1]) plt.scatter(x, y) plt.annotate( word, xy=(x, y), xytext=(5, 2), textcoords="offset points", ha="right", va="bottom", ) plt.show() # ## Using Gensim Library import nltk from gensim.models import Word2Vec from nltk.corpus import stopwords import re paragraph = """I have three visions for India. In 3000 years of our history, people from all over the world have come and invaded us, captured our lands, conquered our minds. From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British, the French, the Dutch, all of them came and looted us, took over what was ours. Yet we have not done this to any other nation. We have not conquered anyone. We have not grabbed their land, their culture, their history and tried to enforce our way of life on them. Why? Because we respect the freedom of others.That is why my first vision is that of freedom. I believe that India got its first vision of this in 1857, when we started the War of Independence. It is this freedom that we must protect and nurture and build on. If we are not free, no one will respect us. My second vision for India’s development. For fifty years we have been a developing nation. It is time we see ourselves as a developed nation. We are among the top 5 nations of the world in terms of GDP. We have a 10 percent growth rate in most areas. Our poverty levels are falling. Our achievements are being globally recognised today. Yet we lack the self-confidence to see ourselves as a developed nation, self-reliant and self-assured. Isn’t this incorrect? I have a third vision. India must stand up to the world. Because I believe that unless India stands up to the world, no one will respect us. Only strength respects strength. We must be strong not only as a military power but also as an economic power. Both must go hand-in-hand. My good fortune was to have worked with three great minds. Dr. Vikram Sarabhai of the Dept. of space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material. I was lucky to have worked with all three of them closely and consider this the great opportunity of my life. I see four milestones in my career""" # Preprocessing the data text = re.sub(r"\[[0-9]*\]", " ", paragraph) text = re.sub(r"\s+", " ", text) text = text.lower() text = re.sub(r"\d", " ", text) text = re.sub(r"\s+", " ", text) # Preparing the dataset sentences = nltk.sent_tokenize(text) sentences = [nltk.word_tokenize(sentence) for sentence in sentences] for i in range(len(sentences)): sentences[i] = [ word for word in sentences[i] if word not in stopwords.words("english") ] # Training the Word2Vec model model = Word2Vec(sentences, min_count=1) # words = model.wv.vocab # Finding Word Vectors vector = model.wv["visions"] print(vector) # Most similar words similar = model.wv.most_similar("vikram") print(similar)
false
0
2,121
0
2,121
2,121
129730681
<jupyter_start><jupyter_text>Twitter Sentiment Analysis # Twitter Sentiment Analysis Dataset ## Overview This is an entity-level sentiment analysis dataset of twitter. Given a message and an entity, the task is to judge the sentiment of the message about the entity. There are three classes in this dataset: Positive, Negative and Neutral. We regard messages that are not relevant to the entity (i.e. Irrelevant) as Neutral. ## Usage Please use `twitter_training.csv` as the training set and `twitter_validation.csv` as the validation set. Top 1 classification accuracy is used as the metric. Kaggle dataset identifier: twitter-entity-sentiment-analysis <jupyter_code>import pandas as pd df = pd.read_csv('twitter-entity-sentiment-analysis/twitter_training.csv') df.info() <jupyter_output><class 'pandas.core.frame.DataFrame'> RangeIndex: 74681 entries, 0 to 74680 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 2401 74681 non-null int64 1 Borderlands 74681 non-null object 2 Positive 74681 non-null object 3 im getting on borderlands and i will murder you all , 73995 non-null object dtypes: int64(1), object(3) memory usage: 2.3+ MB <jupyter_text>Examples: { "2401": 2401, "Borderlands": "Borderlands", "Positive": "Positive", "im getting on borderlands and i will murder you all ,": "I am coming to the borders and I will kill you all," } { "2401": 2401, "Borderlands": "Borderlands", "Positive": "Positive", "im getting on borderlands and i will murder you all ,": "im getting on borderlands and i will kill you all," } { "2401": 2401, "Borderlands": "Borderlands", "Positive": "Positive", "im getting on borderlands and i will murder you all ,": "im coming on borderlands and i will murder you all," } { "2401": 2401, "Borderlands": "Borderlands", "Positive": "Positive", "im getting on borderlands and i will murder you all ,": "im getting on borderlands 2 and i will murder you me all," } <jupyter_script>import numpy as np import pandas as pd import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) df = pd.read_csv("/kaggle/input/twitter-entity-sentiment-analysis/twitter_training.csv") df.columns = ["Tag", "Game", "sentiment", "tweet"] # labels in this file are capitalised ("Positive", "Negative", "Neutral", "Irrelevant") df["sentiment"] = df["sentiment"].map(lambda x: 1 if x == "Positive" else 0) df import gensim from gensim.models import Word2Vec from nltk.tokenize import sent_tokenize, word_tokenize from tqdm import tqdm df = df.dropna() corpus_text = "\n".join(df[:1000]["tweet"]) data = [] # iterate through each sentence in the file for i in sent_tokenize(corpus_text): temp = [] # tokenize the sentence into words for j in word_tokenize(i): temp.append(j.lower()) data.append(temp) model1 = gensim.models.Word2Vec(data, min_count=1, vector_size=100, window=5, sg=0) word1 = "borderlands" word2 = "coming" print( "Cosine similarity between", word1, "and", word2, "- CBOW:", model1.wv.similarity(word1, word2), )
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/730/129730681.ipynb
twitter-entity-sentiment-analysis
jp797498e
[{"Id": 129730681, "ScriptId": 37442371, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13404689, "CreationDate": "05/16/2023 04:55:01", "VersionNumber": 1.0, "Title": "NLP 6", "EvaluationDate": "05/16/2023", "IsChange": true, "TotalLines": 39.0, "LinesInsertedFromPrevious": 39.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
[{"Id": 186074729, "KernelVersionId": 129730681, "SourceDatasetVersionId": 2510329}]
[{"Id": 2510329, "DatasetId": 1520310, "DatasourceVersionId": 2553073, "CreatorUserId": 8093471, "LicenseName": "CC0: Public Domain", "CreationDate": "08/09/2021 02:52:11", "VersionNumber": 2.0, "Title": "Twitter Sentiment Analysis", "Slug": "twitter-entity-sentiment-analysis", "Subtitle": "Entity-level sentiment analysis on multi-lingual tweets.", "Description": "# Twitter Sentiment Analysis Dataset\n\n## Overview\n\nThis is an entity-level sentiment analysis dataset of twitter. Given a message and an entity, the task is to judge the sentiment of the message about the entity. There are three classes in this dataset: Positive, Negative and Neutral. We regard messages that are not relevant to the entity (i.e. Irrelevant) as Neutral.\n\n## Usage\n\nPlease use `twitter_training.csv` as the training set and `twitter_validation.csv` as the validation set. Top 1 classification accuracy is used as the metric.", "VersionNotes": "Data Update 2021/08/09", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
[{"Id": 1520310, "CreatorUserId": 8093471, "OwnerUserId": 8093471.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 2510329.0, "CurrentDatasourceVersionId": 2553073.0, "ForumId": 1540112, "Type": 2, "CreationDate": "08/09/2021 02:05:48", "LastActivityDate": "08/09/2021", "TotalViews": 120881, "TotalDownloads": 18504, "TotalVotes": 145, "TotalKernels": 93}]
[{"Id": 8093471, "UserName": "jp797498e", "DisplayName": "passionate-nlp", "RegisterDate": "08/09/2021", "PerformanceTier": 0}]
import numpy as np import pandas as pd import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) df = pd.read_csv("/kaggle/input/twitter-entity-sentiment-analysis/twitter_training.csv") df.columns = ["Tag", "Game", "sentiment", "tweet"] # labels in this file are capitalised ("Positive", "Negative", "Neutral", "Irrelevant") df["sentiment"] = df["sentiment"].map(lambda x: 1 if x == "Positive" else 0) df import gensim from gensim.models import Word2Vec from nltk.tokenize import sent_tokenize, word_tokenize from tqdm import tqdm df = df.dropna() corpus_text = "\n".join(df[:1000]["tweet"]) data = [] # iterate through each sentence in the file for i in sent_tokenize(corpus_text): temp = [] # tokenize the sentence into words for j in word_tokenize(i): temp.append(j.lower()) data.append(temp) model1 = gensim.models.Word2Vec(data, min_count=1, vector_size=100, window=5, sg=0) word1 = "borderlands" word2 = "coming" print( "Cosine similarity between", word1, "and", word2, "- CBOW:", model1.wv.similarity(word1, word2), )
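The notebook above trains a CBOW model (sg=0). Below is a minimal sketch of the skip-gram counterpart on the same tokenized corpus; it assumes `data`, `word1` and `word2` from the cells above are in scope, and all other hyperparameters are kept identical to the notebook's call.

import gensim

# sg=1 switches gensim's Word2Vec from CBOW to skip-gram
model2 = gensim.models.Word2Vec(data, min_count=1, vector_size=100, window=5, sg=1)
print(
    "Cosine similarity between", word1, "and", word2,
    "- Skip-gram:", model2.wv.similarity(word1, word2),
)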
[{"twitter-entity-sentiment-analysis/twitter_training.csv": {"column_names": "[\"2401\", \"Borderlands\", \"Positive\", \"im getting on borderlands and i will murder you all ,\"]", "column_data_types": "{\"2401\": \"int64\", \"Borderlands\": \"object\", \"Positive\": \"object\", \"im getting on borderlands and i will murder you all ,\": \"object\"}", "info": "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 74681 entries, 0 to 74680\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 2401 74681 non-null int64 \n 1 Borderlands 74681 non-null object\n 2 Positive 74681 non-null object\n 3 im getting on borderlands and i will murder you all , 73995 non-null object\ndtypes: int64(1), object(3)\nmemory usage: 2.3+ MB\n", "summary": "{\"2401\": {\"count\": 74681.0, \"mean\": 6432.6401494356, \"std\": 3740.423819299502, \"min\": 1.0, \"25%\": 3195.0, \"50%\": 6422.0, \"75%\": 9601.0, \"max\": 13200.0}}", "examples": "{\"2401\":{\"0\":2401,\"1\":2401,\"2\":2401,\"3\":2401},\"Borderlands\":{\"0\":\"Borderlands\",\"1\":\"Borderlands\",\"2\":\"Borderlands\",\"3\":\"Borderlands\"},\"Positive\":{\"0\":\"Positive\",\"1\":\"Positive\",\"2\":\"Positive\",\"3\":\"Positive\"},\"im getting on borderlands and i will murder you all ,\":{\"0\":\"I am coming to the borders and I will kill you all,\",\"1\":\"im getting on borderlands and i will kill you all,\",\"2\":\"im coming on borderlands and i will murder you all,\",\"3\":\"im getting on borderlands 2 and i will murder you me all,\"}}"}}]
true
1
<start_data_description><data_path>twitter-entity-sentiment-analysis/twitter_training.csv: <column_names> ['2401', 'Borderlands', 'Positive', 'im getting on borderlands and i will murder you all ,'] <column_types> {'2401': 'int64', 'Borderlands': 'object', 'Positive': 'object', 'im getting on borderlands and i will murder you all ,': 'object'} <dataframe_Summary> {'2401': {'count': 74681.0, 'mean': 6432.6401494356, 'std': 3740.423819299502, 'min': 1.0, '25%': 3195.0, '50%': 6422.0, '75%': 9601.0, 'max': 13200.0}} <dataframe_info> RangeIndex: 74681 entries, 0 to 74680 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 2401 74681 non-null int64 1 Borderlands 74681 non-null object 2 Positive 74681 non-null object 3 im getting on borderlands and i will murder you all , 73995 non-null object dtypes: int64(1), object(3) memory usage: 2.3+ MB <some_examples> {'2401': {'0': 2401, '1': 2401, '2': 2401, '3': 2401}, 'Borderlands': {'0': 'Borderlands', '1': 'Borderlands', '2': 'Borderlands', '3': 'Borderlands'}, 'Positive': {'0': 'Positive', '1': 'Positive', '2': 'Positive', '3': 'Positive'}, 'im getting on borderlands and i will murder you all ,': {'0': 'I am coming to the borders and I will kill you all,', '1': 'im getting on borderlands and i will kill you all,', '2': 'im coming on borderlands and i will murder you all,', '3': 'im getting on borderlands 2 and i will murder you me all,'}} <end_description>
357
0
974
357
129377069
import pandas as pd import numpy as np # Load the datasets into Pandas DataFrames train_proteins = pd.read_csv( "/kaggle/input/amp-parkinsons-disease-progression-prediction/train_proteins.csv" ) train_peptides = pd.read_csv( "/kaggle/input/amp-parkinsons-disease-progression-prediction/train_peptides.csv" ) train_clinical = pd.read_csv( "/kaggle/input/amp-parkinsons-disease-progression-prediction/train_clinical_data.csv" ) train_proteins.shape train_proteins.head() # reshape to one row per visit and one column per protein (UniProt id), with NPX as the values pivoted_train_proteins = train_proteins.pivot( index="visit_id", columns="UniProt", values="NPX" ) pivoted_train_proteins.shape print(pivoted_train_proteins) # rows with at least one missing protein measurement nan_values = pivoted_train_proteins[pivoted_train_proteins.isna().any(axis=1)] print(nan_values) pivoted_train_proteins.head() train_clinical.shape train_clinical.head() train_clinical.describe() print(train_clinical.loc[0]["visit_id"])
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/377/129377069.ipynb
null
null
[{"Id": 129377069, "ScriptId": 38443859, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14946652, "CreationDate": "05/13/2023 09:12:00", "VersionNumber": 3.0, "Title": "GMnb-AMP-Parkinson-Progression-Competition", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 35.0, "LinesInsertedFromPrevious": 18.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 17.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
import pandas as pd import numpy as np # Load the datasets into Pandas DataFrames train_proteins = pd.read_csv( "/kaggle/input/amp-parkinsons-disease-progression-prediction/train_proteins.csv" ) train_peptides = pd.read_csv( "/kaggle/input/amp-parkinsons-disease-progression-prediction/train_peptides.csv" ) train_clinical = pd.read_csv( "/kaggle/input/amp-parkinsons-disease-progression-prediction/train_clinical_data.csv" ) train_proteins.shape train_proteins.head() # reshape to one row per visit and one column per protein (UniProt id), with NPX as the values pivoted_train_proteins = train_proteins.pivot( index="visit_id", columns="UniProt", values="NPX" ) pivoted_train_proteins.shape print(pivoted_train_proteins) # rows with at least one missing protein measurement nan_values = pivoted_train_proteins[pivoted_train_proteins.isna().any(axis=1)] print(nan_values) pivoted_train_proteins.head() train_clinical.shape train_clinical.head() train_clinical.describe() print(train_clinical.loc[0]["visit_id"])
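The pivot above leaves NaN wherever a protein was not measured at a given visit. One hedged option, not part of the original notebook, is simple per-protein median imputation before any modelling; the sketch assumes `pivoted_train_proteins` from the cell above is in scope.

# fillna with a Series aligns on column labels, so each protein column
# is filled with its own median NPX value
imputed = pivoted_train_proteins.fillna(pivoted_train_proteins.median())
print(imputed.isna().sum().sum())  # 0 missing values remain after imputation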
false
0
311
0
311
311
129377506
import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotlib.pyplot as plt # data visualization import seaborn as sns # statistical data visualization # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session data = "/kaggle/input/titanic/train.csv" df = pd.read_csv(data, header=0) df.shape # (row count, column count) df.head() # (show the first 5 rows of the DataFrame) col_names = [ "PassengerId", "Survived", "Pclass", "Name", "Sex", "Age", "SibSp", "Parch", "Ticket", "Fare", "Cabin", "Embarked", ] df.columns = col_names df["Survived"].value_counts() x = df.drop(["Survived"], axis=1) x = df[["Age", "Sex", "Pclass"]] y = df["Survived"] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( x, y, test_size=0.33, random_state=42 ) # model_selection is the scikit-learn module used for model selection and parameter tuning. # It provides functions for many tasks such as splitting a dataset into training and test sets, # performing cross-validation, tuning hyperparameters, and carrying out model selection. X_train.shape, X_test.shape X_train.dtypes import category_encoders as ce encoder = ce.OrdinalEncoder(cols=["Age", "Sex", "Pclass"]) # category_encoders is a Python library used to convert categorical variables into numerical values. # This conversion allows machine learning models to work with categorical data. # category_encoders offers different encoding techniques such as label encoding, one-hot encoding, target encoding and binary encoding. # Choosing among these techniques depends on the structure of the dataset and the goal of the analysis. X_train = encoder.fit_transform(X_train) X_test = encoder.transform(X_test) # The encoder's "fit" method is called on the categorical variables in the dataset. This builds a mapping table that records the classes # of each categorical variable and assigns a numerical value to each class. # Using this mapping table, the categorical variables in the dataset are converted into numerical values. from sklearn.tree import DecisionTreeClassifier clf_en = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=1) clf_en.fit(X_train, y_train) # The statement from sklearn.tree import DecisionTreeClassifier is the Python line needed to use the # DecisionTreeClassifier class from the scikit-learn library. # The classifier's criterion parameter determines the splitting criterion. Since 'entropy' is used here, # the split that most reduces the impurity between the classes is chosen. # The max_depth parameter sets how many levels deep the decision tree may grow. # This can help keep the tree from overfitting. # The random_state parameter fixes the random number generator so that the model produces the same results every time it is run. 
# This ensures the results stay consistent when we rerun the model. clf_en.fit(X_train, y_train) # The fit() method is used to train the classifier model. # Using the features and the target variable in the dataset, it lets the classifier learn and build a decision tree. y_pred_en = clf_en.predict(X_test) # The predict method uses the trained decision tree classifier to make class predictions from the given input features. from sklearn.metrics import accuracy_score print( "Model accuracy score with criterion entropy: {0:0.4f}".format( accuracy_score(y_test, y_pred_en) ) ) # accuracy_score computes the accuracy by comparing a classifier's predictions with the true class labels. # Given the true class labels and the labels predicted by the classifier, the function calculates the proportion of # predicted labels that match the true labels. It is used to understand how accurate the predictions are. y_pred_train_en = clf_en.predict(X_train) y_pred_train_en print( "Training-set accuracy score: {0:0.4f}".format( accuracy_score(y_train, y_pred_train_en) ) ) # print the scores on training and test set print("Training set score: {:.4f}".format(clf_en.score(X_train, y_train))) print("Test set score: {:.4f}".format(clf_en.score(X_test, y_test))) plt.figure(figsize=(12, 8)) from sklearn import tree tree.plot_tree(clf_en.fit(X_train, y_train)) # plot_tree is a function that visualizes a decision tree. The tree module contains a set of # algorithms used to build decision trees. The plot_tree function draws the tree and visualizes the features at its nodes, # the split criteria and the branches. This is useful for better understanding how the decision tree works. import graphviz dot_data = tree.export_graphviz( clf_en, out_file=None, feature_names=X_train.columns, class_names=["Died", "Survived"], label="all", filled=True, rounded=True, special_characters=True, ) graph = graphviz.Source(dot_data) graph print( "Age : ", X_train.iloc[260].Age, " - Pclass :", X_train.iloc[260].Pclass, " - Sex :", X_train.iloc[260].Sex, " - Survived :", y_train.iloc[260], ) print( "Age : ", X_train.iloc[265].Age, " - Pclass :", X_train.iloc[265].Pclass, " - Sex :", X_train.iloc[265].Sex, " - Survived :", y_train.iloc[265], )
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/377/129377506.ipynb
null
null
[{"Id": 129377506, "ScriptId": 38193305, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 5172545, "CreationDate": "05/13/2023 09:16:49", "VersionNumber": 4.0, "Title": "Decision Tree - Mehmet U\u011fur Atmaca", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 135.0, "LinesInsertedFromPrevious": 45.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 90.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotlib.pyplot as plt # data visualization import seaborn as sns # statistical data visualization # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session data = "/kaggle/input/titanic/train.csv" df = pd.read_csv(data, header=0) df.shape # (row count, column count) df.head() # (show the first 5 rows of the DataFrame) col_names = [ "PassengerId", "Survived", "Pclass", "Name", "Sex", "Age", "SibSp", "Parch", "Ticket", "Fare", "Cabin", "Embarked", ] df.columns = col_names df["Survived"].value_counts() x = df.drop(["Survived"], axis=1) x = df[["Age", "Sex", "Pclass"]] y = df["Survived"] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( x, y, test_size=0.33, random_state=42 ) # model_selection is the scikit-learn module used for model selection and parameter tuning. # It provides functions for many tasks such as splitting a dataset into training and test sets, # performing cross-validation, tuning hyperparameters, and carrying out model selection. X_train.shape, X_test.shape X_train.dtypes import category_encoders as ce encoder = ce.OrdinalEncoder(cols=["Age", "Sex", "Pclass"]) # category_encoders is a Python library used to convert categorical variables into numerical values. # This conversion allows machine learning models to work with categorical data. # category_encoders offers different encoding techniques such as label encoding, one-hot encoding, target encoding and binary encoding. # Choosing among these techniques depends on the structure of the dataset and the goal of the analysis. X_train = encoder.fit_transform(X_train) X_test = encoder.transform(X_test) # The encoder's "fit" method is called on the categorical variables in the dataset. This builds a mapping table that records the classes # of each categorical variable and assigns a numerical value to each class. # Using this mapping table, the categorical variables in the dataset are converted into numerical values. from sklearn.tree import DecisionTreeClassifier clf_en = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=1) clf_en.fit(X_train, y_train) # The statement from sklearn.tree import DecisionTreeClassifier is the Python line needed to use the # DecisionTreeClassifier class from the scikit-learn library. # The classifier's criterion parameter determines the splitting criterion. Since 'entropy' is used here, # the split that most reduces the impurity between the classes is chosen. # The max_depth parameter sets how many levels deep the decision tree may grow. # This can help keep the tree from overfitting. # The random_state parameter fixes the random number generator so that the model produces the same results every time it is run. 
# This ensures the results stay consistent when we rerun the model. clf_en.fit(X_train, y_train) # The fit() method is used to train the classifier model. # Using the features and the target variable in the dataset, it lets the classifier learn and build a decision tree. y_pred_en = clf_en.predict(X_test) # The predict method uses the trained decision tree classifier to make class predictions from the given input features. from sklearn.metrics import accuracy_score print( "Model accuracy score with criterion entropy: {0:0.4f}".format( accuracy_score(y_test, y_pred_en) ) ) # accuracy_score computes the accuracy by comparing a classifier's predictions with the true class labels. # Given the true class labels and the labels predicted by the classifier, the function calculates the proportion of # predicted labels that match the true labels. It is used to understand how accurate the predictions are. y_pred_train_en = clf_en.predict(X_train) y_pred_train_en print( "Training-set accuracy score: {0:0.4f}".format( accuracy_score(y_train, y_pred_train_en) ) ) # print the scores on training and test set print("Training set score: {:.4f}".format(clf_en.score(X_train, y_train))) print("Test set score: {:.4f}".format(clf_en.score(X_test, y_test))) plt.figure(figsize=(12, 8)) from sklearn import tree tree.plot_tree(clf_en.fit(X_train, y_train)) # plot_tree is a function that visualizes a decision tree. The tree module contains a set of # algorithms used to build decision trees. The plot_tree function draws the tree and visualizes the features at its nodes, # the split criteria and the branches. This is useful for better understanding how the decision tree works. import graphviz dot_data = tree.export_graphviz( clf_en, out_file=None, feature_names=X_train.columns, class_names=["Died", "Survived"], label="all", filled=True, rounded=True, special_characters=True, ) graph = graphviz.Source(dot_data) graph print( "Age : ", X_train.iloc[260].Age, " - Pclass :", X_train.iloc[260].Pclass, " - Sex :", X_train.iloc[260].Sex, " - Survived :", y_train.iloc[260], ) print( "Age : ", X_train.iloc[265].Age, " - Pclass :", X_train.iloc[265].Pclass, " - Sex :", X_train.iloc[265].Sex, " - Survived :", y_train.iloc[265], )
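Beyond the accuracy scores printed above, a confusion matrix shows where the tree confuses the two classes. A short sketch, assuming `y_test` and `y_pred_en` from the notebook above are in scope; confusion_matrix is not imported there, so it is imported here.

from sklearn.metrics import confusion_matrix

# rows are the true classes (0 = died, 1 = survived), columns the predictions
cm = confusion_matrix(y_test, y_pred_en)
print(cm)
print("misclassified passengers:", cm[0, 1] + cm[1, 0])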
false
0
2,152
0
2,152
2,152
129377923
import numpy as np import pandas as pd import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # # # IMPORT # # train = pd.read_csv( "/kaggle/input/house-prices-advanced-regression-techniques/train.csv" ) test = pd.read_csv("/kaggle/input/house-prices-advanced-regression-techniques/test.csv") actual = pd.read_csv( "/kaggle/input/house-prices-advanced-regression-techniques/sample_submission.csv" ) y = train.SalePrice combine = [train, test] # # # EDA # # print(train[train.notna().sum().sort_values().index].info()) train.columns[train.isnull().any()].tolist() print(test[test.notna().sum().sort_values().index].info()) test.columns[test.isnull().any()].tolist() # check the columns with lots of NaN values; can we drop them? import matplotlib.pyplot as plt import seaborn as sns sns.boxplot(x="SalePrice", data=train) plt.show() plt.clf() sns.displot(train.SalePrice) plt.show() # # # DATA WRANGLING & TIDYING # # # DATA TRANSFORMATION # # log_y = np.log(train.SalePrice) sns.displot(log_y) plt.show()
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/377/129377923.ipynb
null
null
[{"Id": 129377923, "ScriptId": 38442203, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 13731473, "CreationDate": "05/13/2023 09:20:56", "VersionNumber": 1.0, "Title": "House-Prices_solution", "EvaluationDate": "05/13/2023", "IsChange": true, "TotalLines": 72.0, "LinesInsertedFromPrevious": 72.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
import numpy as np import pandas as pd import os for dirname, _, filenames in os.walk("/kaggle/input"): for filename in filenames: print(os.path.join(dirname, filename)) # # # IMPORT # # train = pd.read_csv( "/kaggle/input/house-prices-advanced-regression-techniques/train.csv" ) test = pd.read_csv("/kaggle/input/house-prices-advanced-regression-techniques/test.csv") actual = pd.read_csv( "/kaggle/input/house-prices-advanced-regression-techniques/sample_submission.csv" ) y = train.SalePrice combine = [train, test] # # # EDA # # print(train[train.notna().sum().sort_values().index].info()) train.columns[train.isnull().any()].tolist() print(test[test.notna().sum().sort_values().index].info()) test.columns[test.isnull().any()].tolist() # check the columns with lots of NaN values; can we drop them? import matplotlib.pyplot as plt import seaborn as sns sns.boxplot(x="SalePrice", data=train) plt.show() plt.clf() sns.displot(train.SalePrice) plt.show() # # # DATA WRANGLING & TIDYING # # # DATA TRANSFORMATION # # log_y = np.log(train.SalePrice) sns.displot(log_y) plt.show()
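The notebook's EDA comment asks whether the columns with many NaN values can be dropped. Below is a sketch of one way to do that; the 50% threshold is an assumption, not taken from the notebook, and `train` is the DataFrame loaded above.

# keep only columns where at most half of the values are missing
nan_share = train.isnull().mean()           # fraction of NaN per column
high_nan_cols = nan_share[nan_share > 0.5].index.tolist()
print("dropping:", high_nan_cols)
train_reduced = train.drop(columns=high_nan_cols)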
false
0
386
0
386
386
129300717
<jupyter_start><jupyter_text>semeion Kaggle dataset identifier: semeion <jupyter_script># The **Semeion Dataset** was chosen for this task since it contains 266 attributes (256 pixel features plus 10 label columns). # The Semeion dataset is a well-known dataset in the field of pattern recognition and machine learning. It is often used for image classification tasks and handwritten digit recognition. The dataset was created by the Semeion Research Center, an Italian research institute, and was released in 1991. # The Semeion dataset consists of handwritten digits from 0 to 9, written by approximately 80 different individuals. Each digit is represented as a 16x16 grayscale image, resulting in a total of 256 pixels per image. These images have been pre-processed and digitized into a binary format. # The other 10 attributes show which digit each row belongs to (from 0 to 9). # # Importing necessary libraries import numpy as np import pandas as pd from sklearn.decomposition import PCA from sklearn.model_selection import train_test_split from sklearn import svm from sklearn.metrics import confusion_matrix from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import LogisticRegression from sklearn import metrics from matplotlib import pyplot as plt import time semeion = pd.read_csv("../input/semeion/semeion.data", delimiter=" ", header=None) semeion.head() # We observe that column 266 contains only NaN values, therefore we drop it semeion.drop(266, axis=1, inplace=True) semeion.head() # Now, we want to separate the label information from the actual data. Labelling starts at index 256: the digit a row belongs to is marked with a 1, with the others set to 0. We create a label map from that information and add it as a new column named **label** label_map = { 256: "0", 257: "1", 258: "2", 259: "3", 260: "4", 261: "5", 262: "6", 263: "7", 264: "8", 265: "9", } semeion["label"] = semeion.apply(lambda row: label_map[row[256:266].idxmax()], axis=1) semeion # Then, we separate our data into X values (X_input) and Y values (Y_label) X_input = semeion.iloc[:, 0:256] Y_label = semeion.iloc[:, -1] print(len(X_input)) print(len(Y_label)) # From this point, the dataset will be tested on 3 classifiers: # * **SVM** # * **KNN** # * **Logistic Regression** # All three classifiers will be tested with PCA over a range of component counts, starting at 10 and increasing in steps of one tenth of the feature count. 
# # SVM svm_time_values = [] svm_pca_values = [] svm_acc_values = [] x_train, x_test, y_train, y_test = train_test_split( X_input, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() svm_classifier = svm.SVC() svm_classifier.fit(x_train, y_train) pred = svm_classifier.predict(x_test) no_pca_acc = metrics.accuracy_score(y_test, pred) * 100 no_pca_finish_time = time.time() - start_time print("No PCA accuracy: %.2f" % (no_pca_acc), "%") print("Elapsed time: ", no_pca_finish_time) pca_start = 10 for pca_val in range(pca_start, len(X_input.columns), len(X_input.columns) // 10): svm_pca_values.append(pca_val) X_input_copy = X_input.copy() pca = PCA(n_components=pca_val, random_state=42) pca.fit(X_input_copy) X_pca = pca.transform(X_input_copy) x_train, x_test, y_train, y_test = train_test_split( X_pca, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() svm_classifier = svm.SVC() svm_classifier.fit(x_train, y_train) pred = svm_classifier.predict(x_test) pca_acc = metrics.accuracy_score(y_test, pred) * 100 pca_finish_time = time.time() - start_time svm_acc_values.append(pca_acc) svm_time_values.append(pca_finish_time) svm_time_values.append(no_pca_finish_time) svm_pca_values.append(len(X_input.columns)) svm_acc_values.append(no_pca_acc) plt.bar(svm_pca_values, svm_acc_values) plt.ylim([80, 100]) plt.ylabel("SVM model accuracy") plt.xlabel("PCA N=") plt.plot(svm_pca_values, svm_time_values) plt.ylabel("SVM model time complexity") plt.xlabel("PCA N=") # With SVM, it is observed that as the number of PCA components increases, time complexity also goes up. The last value is without using any PCA. # Regarding the accuracy, it increases up to the second PCA step (n=35) and after that, the accuracy value fluctuates between 95% and 97.5%. # # KNN knn_time_values = [] knn_pca_values = [] knn_acc_values = [] x_train, x_test, y_train, y_test = train_test_split( X_input, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() knn_classifier = KNeighborsClassifier(n_neighbors=15) knn_classifier.fit(x_train, y_train) pred = knn_classifier.predict(x_test) no_pca_acc = metrics.accuracy_score(y_test, pred) * 100 no_pca_finish_time = time.time() - start_time print("No PCA accuracy: %.2f" % (no_pca_acc), "%") print("Elapsed time: ", no_pca_finish_time) pca_start = 10 for pca_val in range(pca_start, len(X_input.columns), len(X_input.columns) // 10): knn_pca_values.append(pca_val) X_input_copy = X_input.copy() pca = PCA(n_components=pca_val, random_state=42) pca.fit(X_input_copy) X_pca = pca.transform(X_input_copy) x_train, x_test, y_train, y_test = train_test_split( X_pca, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() knn_classifier = KNeighborsClassifier(n_neighbors=15) knn_classifier.fit(x_train, y_train) pred = knn_classifier.predict(x_test) pca_acc = metrics.accuracy_score(y_test, pred) * 100 pca_finish_time = time.time() - start_time knn_acc_values.append(pca_acc) knn_time_values.append(pca_finish_time) knn_time_values.append(no_pca_finish_time) knn_pca_values.append(len(X_input.columns)) knn_acc_values.append(no_pca_acc) plt.bar(knn_pca_values, knn_acc_values) plt.ylim([80, 100]) plt.ylabel("KNN model accuracy") plt.xlabel("PCA N=") plt.plot(knn_pca_values, knn_time_values) plt.ylabel("KNN model time complexity") plt.xlabel("PCA N=") # With KNN, the time complexity fluctuates with the increasing number of PCA components. 
# When it comes to the accuracy, it increases until the third PCA step (n=60), reaching around 92.5%, and after that it fluctuates between 90% and 92.5%. # # Logistic Regression lr_time_values = [] lr_pca_values = [] lr_acc_values = [] x_train, x_test, y_train, y_test = train_test_split( X_input, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() lr_classifier = LogisticRegression( C=0.1, penalty="l2", solver="newton-cg", max_iter=1000 ) lr_classifier.fit(x_train, y_train) pred = lr_classifier.predict(x_test) no_pca_acc = metrics.accuracy_score(y_test, pred) * 100 no_pca_finish_time = time.time() - start_time print("No PCA accuracy: %.2f" % (no_pca_acc), "%") print("Elapsed time: ", no_pca_finish_time) pca_start = 10 for pca_val in range(pca_start, len(X_input.columns), len(X_input.columns) // 10): lr_pca_values.append(pca_val) X_input_copy = X_input.copy() pca = PCA(n_components=pca_val, random_state=42) pca.fit(X_input_copy) X_pca = pca.transform(X_input_copy) x_train, x_test, y_train, y_test = train_test_split( X_pca, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() lr_classifier = LogisticRegression( C=0.1, penalty="l2", solver="newton-cg", max_iter=1000 ) lr_classifier.fit(x_train, y_train) pred = lr_classifier.predict(x_test) pca_acc = metrics.accuracy_score(y_test, pred) * 100 pca_finish_time = time.time() - start_time lr_acc_values.append(pca_acc) lr_time_values.append(pca_finish_time) lr_time_values.append(no_pca_finish_time) lr_pca_values.append(len(X_input.columns)) lr_acc_values.append(no_pca_acc) plt.bar(lr_pca_values, lr_acc_values) plt.ylim([80, 100]) plt.ylabel("LogReg model accuracy") plt.xlabel("PCA N=") plt.plot(lr_pca_values, lr_time_values) plt.ylabel("LogReg model time complexity") plt.xlabel("PCA N=")
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/300/129300717.ipynb
semeion
ibrahimalizade
[{"Id": 129300717, "ScriptId": 38400407, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 14337270, "CreationDate": "05/12/2023 15:07:18", "VersionNumber": 1.0, "Title": "Assignment 2 - Task 2", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 253.0, "LinesInsertedFromPrevious": 253.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
[{"Id": 185218637, "KernelVersionId": 129300717, "SourceDatasetVersionId": 3678171}]
[{"Id": 3678171, "DatasetId": 2201260, "DatasourceVersionId": 3732300, "CreatorUserId": 3032646, "LicenseName": "Unknown", "CreationDate": "05/22/2022 13:51:21", "VersionNumber": 1.0, "Title": "semeion", "Slug": "semeion", "Subtitle": NaN, "Description": NaN, "VersionNotes": "Initial release", "TotalCompressedBytes": 0.0, "TotalUncompressedBytes": 0.0}]
[{"Id": 2201260, "CreatorUserId": 3032646, "OwnerUserId": 3032646.0, "OwnerOrganizationId": NaN, "CurrentDatasetVersionId": 3678171.0, "CurrentDatasourceVersionId": 3732300.0, "ForumId": 2227349, "Type": 2, "CreationDate": "05/22/2022 13:51:21", "LastActivityDate": "05/22/2022", "TotalViews": 205, "TotalDownloads": 14, "TotalVotes": 1, "TotalKernels": 2}]
[{"Id": 3032646, "UserName": "ibrahimalizade", "DisplayName": "Ibrahim Orucoglu", "RegisterDate": "04/03/2019", "PerformanceTier": 0}]
# The **Semeion Dataset** was chosen for this task since it contains 266 attributes (256 pixel features plus 10 label columns). # The Semeion dataset is a well-known dataset in the field of pattern recognition and machine learning. It is often used for image classification tasks and handwritten digit recognition. The dataset was created by the Semeion Research Center, an Italian research institute, and was released in 1991. # The Semeion dataset consists of handwritten digits from 0 to 9, written by approximately 80 different individuals. Each digit is represented as a 16x16 grayscale image, resulting in a total of 256 pixels per image. These images have been pre-processed and digitized into a binary format. # The other 10 attributes show which digit each row belongs to (from 0 to 9). # # Importing necessary libraries import numpy as np import pandas as pd from sklearn.decomposition import PCA from sklearn.model_selection import train_test_split from sklearn import svm from sklearn.metrics import confusion_matrix from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import LogisticRegression from sklearn import metrics from matplotlib import pyplot as plt import time semeion = pd.read_csv("../input/semeion/semeion.data", delimiter=" ", header=None) semeion.head() # We observe that column 266 contains only NaN values, therefore we drop it semeion.drop(266, axis=1, inplace=True) semeion.head() # Now, we want to separate the label information from the actual data. Labelling starts at index 256: the digit a row belongs to is marked with a 1, with the others set to 0. We create a label map from that information and add it as a new column named **label** label_map = { 256: "0", 257: "1", 258: "2", 259: "3", 260: "4", 261: "5", 262: "6", 263: "7", 264: "8", 265: "9", } semeion["label"] = semeion.apply(lambda row: label_map[row[256:266].idxmax()], axis=1) semeion # Then, we separate our data into X values (X_input) and Y values (Y_label) X_input = semeion.iloc[:, 0:256] Y_label = semeion.iloc[:, -1] print(len(X_input)) print(len(Y_label)) # From this point, the dataset will be tested on 3 classifiers: # * **SVM** # * **KNN** # * **Logistic Regression** # All three classifiers will be tested with PCA over a range of component counts, starting at 10 and increasing in steps of one tenth of the feature count. 
# # SVM svm_time_values = [] svm_pca_values = [] svm_acc_values = [] x_train, x_test, y_train, y_test = train_test_split( X_input, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() svm_classifier = svm.SVC() svm_classifier.fit(x_train, y_train) pred = svm_classifier.predict(x_test) no_pca_acc = metrics.accuracy_score(y_test, pred) * 100 no_pca_finish_time = time.time() - start_time print("No PCA accuracy: %.2f" % (no_pca_acc), "%") print("Elapsed time: ", no_pca_finish_time) pca_start = 10 for pca_val in range(pca_start, len(X_input.columns), len(X_input.columns) // 10): svm_pca_values.append(pca_val) X_input_copy = X_input.copy() pca = PCA(n_components=pca_val, random_state=42) pca.fit(X_input_copy) X_pca = pca.transform(X_input_copy) x_train, x_test, y_train, y_test = train_test_split( X_pca, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() svm_classifier = svm.SVC() svm_classifier.fit(x_train, y_train) pred = svm_classifier.predict(x_test) pca_acc = metrics.accuracy_score(y_test, pred) * 100 pca_finish_time = time.time() - start_time svm_acc_values.append(pca_acc) svm_time_values.append(pca_finish_time) svm_time_values.append(no_pca_finish_time) svm_pca_values.append(len(X_input.columns)) svm_acc_values.append(no_pca_acc) plt.bar(svm_pca_values, svm_acc_values) plt.ylim([80, 100]) plt.ylabel("SVM model accuracy") plt.xlabel("PCA N=") plt.plot(svm_pca_values, svm_time_values) plt.ylabel("SVM model time complexity") plt.xlabel("PCA N=") # With SVM, it is observed that as the number of PCA components increases, time complexity also goes up. The last value is without using any PCA. # Regarding the accuracy, it increases up to the second PCA step (n=35) and after that, the accuracy value fluctuates between 95% and 97.5%. # # KNN knn_time_values = [] knn_pca_values = [] knn_acc_values = [] x_train, x_test, y_train, y_test = train_test_split( X_input, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() knn_classifier = KNeighborsClassifier(n_neighbors=15) knn_classifier.fit(x_train, y_train) pred = knn_classifier.predict(x_test) no_pca_acc = metrics.accuracy_score(y_test, pred) * 100 no_pca_finish_time = time.time() - start_time print("No PCA accuracy: %.2f" % (no_pca_acc), "%") print("Elapsed time: ", no_pca_finish_time) pca_start = 10 for pca_val in range(pca_start, len(X_input.columns), len(X_input.columns) // 10): knn_pca_values.append(pca_val) X_input_copy = X_input.copy() pca = PCA(n_components=pca_val, random_state=42) pca.fit(X_input_copy) X_pca = pca.transform(X_input_copy) x_train, x_test, y_train, y_test = train_test_split( X_pca, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() knn_classifier = KNeighborsClassifier(n_neighbors=15) knn_classifier.fit(x_train, y_train) pred = knn_classifier.predict(x_test) pca_acc = metrics.accuracy_score(y_test, pred) * 100 pca_finish_time = time.time() - start_time knn_acc_values.append(pca_acc) knn_time_values.append(pca_finish_time) knn_time_values.append(no_pca_finish_time) knn_pca_values.append(len(X_input.columns)) knn_acc_values.append(no_pca_acc) plt.bar(knn_pca_values, knn_acc_values) plt.ylim([80, 100]) plt.ylabel("KNN model accuracy") plt.xlabel("PCA N=") plt.plot(knn_pca_values, knn_time_values) plt.ylabel("KNN model time complexity") plt.xlabel("PCA N=") # With KNN, the time complexity fluctuates with the increasing number of PCA components. 
# When it comes to the accuracy, it increases until the third PCA step (n=60), reaching around 92.5%, and after that it fluctuates between 90% and 92.5%. # # Logistic Regression lr_time_values = [] lr_pca_values = [] lr_acc_values = [] x_train, x_test, y_train, y_test = train_test_split( X_input, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() lr_classifier = LogisticRegression( C=0.1, penalty="l2", solver="newton-cg", max_iter=1000 ) lr_classifier.fit(x_train, y_train) pred = lr_classifier.predict(x_test) no_pca_acc = metrics.accuracy_score(y_test, pred) * 100 no_pca_finish_time = time.time() - start_time print("No PCA accuracy: %.2f" % (no_pca_acc), "%") print("Elapsed time: ", no_pca_finish_time) pca_start = 10 for pca_val in range(pca_start, len(X_input.columns), len(X_input.columns) // 10): lr_pca_values.append(pca_val) X_input_copy = X_input.copy() pca = PCA(n_components=pca_val, random_state=42) pca.fit(X_input_copy) X_pca = pca.transform(X_input_copy) x_train, x_test, y_train, y_test = train_test_split( X_pca, Y_label, test_size=0.2, random_state=42 ) start_time = time.time() lr_classifier = LogisticRegression( C=0.1, penalty="l2", solver="newton-cg", max_iter=1000 ) lr_classifier.fit(x_train, y_train) pred = lr_classifier.predict(x_test) pca_acc = metrics.accuracy_score(y_test, pred) * 100 pca_finish_time = time.time() - start_time lr_acc_values.append(pca_acc) lr_time_values.append(pca_finish_time) lr_time_values.append(no_pca_finish_time) lr_pca_values.append(len(X_input.columns)) lr_acc_values.append(no_pca_acc) plt.bar(lr_pca_values, lr_acc_values) plt.ylim([80, 100]) plt.ylabel("LogReg model accuracy") plt.xlabel("PCA N=") plt.plot(lr_pca_values, lr_time_values) plt.ylabel("LogReg model time complexity") plt.xlabel("PCA N=")
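Instead of sweeping component counts in fixed steps, the number of PCA components can also be chosen from the cumulative explained variance. A sketch under the assumption that `X_input` from the notebook above is in scope; the 95% threshold is an illustrative choice, not part of the original analysis.

import numpy as np
from sklearn.decomposition import PCA

pca_full = PCA(random_state=42).fit(X_input)      # keep all 256 components
cumvar = np.cumsum(pca_full.explained_variance_ratio_)
n_95 = int(np.argmax(cumvar >= 0.95)) + 1         # first count reaching 95%
print("components needed for 95% of the variance:", n_95)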
false
0
2,766
0
2,785
2,766
129294630
# # Kaggle Intro # * Please register on kaggle. Use your Levi9 email. If already registered, you can use your account, but identify yourself to the org team # * **Important:** confirm your phone number. Otherwise you will not have access to some features like Internet access from Notebook or GPU # * You have 30 free GPU hours per week per user. The quota is available in your profile. # ## Notebook configuration # * Enabling internet access # * Enabling GPU # ## Installing additional libraries # ## Working with the data # * competition data # * adding your own dataset import torchvision.datasets as dset from torchvision import transforms # needed for transforms.ToTensor() below path2data = "/kaggle/input/levi9-hack9-2023/train" path2json = "/kaggle/input/levi9-hack9-2023/train.json" coco_train = dset.CocoDetection( root=path2data, annFile=path2json, transform=transforms.ToTensor() ) print("Number of samples: ", len(coco_train)) img, target = coco_train[0] print(img.shape) # a tensor after ToTensor(), so .shape rather than PIL's .size print(target)
/fsx/loubna/kaggle_data/kaggle-code-data/data/0129/294/129294630.ipynb
null
null
[{"Id": 129294630, "ScriptId": 38433042, "ParentScriptVersionId": NaN, "ScriptLanguageId": 9, "AuthorUserId": 1904452, "CreationDate": "05/12/2023 14:14:56", "VersionNumber": 1.0, "Title": "Kaggle intro", "EvaluationDate": "05/12/2023", "IsChange": true, "TotalLines": 31.0, "LinesInsertedFromPrevious": 31.0, "LinesChangedFromPrevious": 0.0, "LinesUnchangedFromPrevious": 0.0, "LinesInsertedFromFork": NaN, "LinesDeletedFromFork": NaN, "LinesChangedFromFork": NaN, "LinesUnchangedFromFork": NaN, "TotalVotes": 0}]
null
null
null
null
# # Kaggle Intro # * Please register on kaggle. Use your Levi9 email. If already registered, you can use your account, but identify yourself to the org team # * **Important:** confirm your phone number. Otherwise you will not have access to some features like Internet access from Notebook or GPU # * You have 30 free GPU hours per week per user. The quota is available in your profile. # ## Notebook configuration # * Enabling internet access # * Enabling GPU # ## Installing additional libraries # ## Working with the data # * competition data # * adding your own dataset import torchvision.datasets as dset from torchvision import transforms # needed for transforms.ToTensor() below path2data = "/kaggle/input/levi9-hack9-2023/train" path2json = "/kaggle/input/levi9-hack9-2023/train.json" coco_train = dset.CocoDetection( root=path2data, annFile=path2json, transform=transforms.ToTensor() ) print("Number of samples: ", len(coco_train)) img, target = coco_train[0] print(img.shape) # a tensor after ToTensor(), so .shape rather than PIL's .size print(target)
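The "Installing additional libraries" heading above has no code cell. In a Kaggle notebook this is usually a pip call, which requires the internet toggle mentioned earlier; `pycocotools` below is only an illustrative package, not one the notebook names.

# In a notebook cell the idiomatic form is the shell magic:
#   !pip install pycocotools
# The plain-Python equivalent, runnable outside a notebook:
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "pycocotools"])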
false
0
266
0
266
266
129294338
<jupyter_start><jupyter_text>Supervised-Learning Kaggle dataset identifier: supervisedlearning <jupyter_script># # # **Content** # ## Regression # * [Linear Regression](#1.) # * [Multiple Linear Regression](#2.) # * [Polynomial Linear Regression](#3.) # * [Support Vector Regression](#4.) # * [Decision Tree Regression](#5.) # * [Random Forest Regression](#6.) # ## Classification # * [K-Nearest Neighbour (KNN) Classification](#7.) # * [Support Vector Machine (SVM) Classification](#8.) # * [Naive Bayes Classification](#9.) # * [Decision Tree Classification](#10.) # * [Random Forest Classification](#11.) # * [Conclusion](#12.) # # Regression # # # Linear Regression # import pandas as pd import matplotlib.pyplot as plt import math data = pd.read_csv("/kaggle/input/supervised/auto.csv") data.head() data.info() plt.scatter(data.mpg, data.displ) plt.xlabel("mpg") plt.ylabel("displ") plt.show() # %% linear regression # sklearn library from sklearn.linear_model import LinearRegression # linear regression model linear_reg = LinearRegression() x = data.mpg.values.reshape(-1, 1) y = data.displ.values.reshape(-1, 1) linear_reg.fit(x, y) print("R sq: ", linear_reg.score(x, y)) print("Correlation: ", math.sqrt(linear_reg.score(x, y))) # %% prediction import numpy as np print("Coefficient for X: ", linear_reg.coef_) print("Intercept for X: ", linear_reg.intercept_) print( "Regression line is: y = " + str(linear_reg.intercept_[0]) + " + (x * " + str(linear_reg.coef_[0][0]) + ")" ) # mpg = 1663 + 1138*displ mpg_new = 1663 + 1138 * 11 print(mpg_new) array = np.array([11]).reshape(-1, 1) print(linear_reg.predict(array)) # visualize line array = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]).reshape( -1, 1 ) # plt.scatter(x, y) # plt.show() y_head = linear_reg.predict(array) # mpg plt.plot(array, y_head, color="yellow") array = np.array([100]).reshape(-1, 1) linear_reg.predict(array) y_head = linear_reg.predict(x) # maas from sklearn.metrics import r2_score print("r_square score: ", r2_score(y, y_head)) # # # # Multiple Linear Regression x = data.iloc[:, [0, 1]].values y = data.displ.values.reshape(-1, 1) multiple_linear_regression = LinearRegression() multiple_linear_regression.fit(x, y) print("b0: ", multiple_linear_regression.intercept_) print("b1: ", multiple_linear_regression.coef_) # predict x_ = np.array([[10, 35], [5, 35]]) multiple_linear_regression.predict(x_) y_head = multiple_linear_regression.predict(x) from sklearn.metrics import r2_score print("r_square score: ", r2_score(y, y_head)) # # # # Polynomial Linear Regression x = data["mpg"].values.reshape(-1, 1) y = data["displ"].values.reshape(-1, 1) plt.scatter(x, y) plt.xlabel("mpg") plt.ylabel("displ") plt.show() # polynomial regression = y = b0 + b1*x +b2*x^2 + b3*x^3 + ... 
# # Support Vector Regression
x = data["mpg"].values.reshape(-1, 1)
y = data["displ"].values.reshape(-1, 1)

plt.scatter(x, y)
plt.xlabel("mpg")
plt.ylabel("displ")
plt.show()

# SVR is sensitive to feature scale, so standardize both variables first
from sklearn.preprocessing import StandardScaler

sc1 = StandardScaler()
x_scaled = sc1.fit_transform(x)
sc2 = StandardScaler()
y_scaled = sc2.fit_transform(y)

# %% SVR
from sklearn.svm import SVR

svr_reg = SVR(kernel="rbf")
svr_reg.fit(x_scaled, y_scaled.ravel())  # ravel() gives the 1-d target SVR expects
y_head = svr_reg.predict(x_scaled)

# visualize the fitted curve in the scaled space
plt.plot(x_scaled, y_head, color="green", label="SVR")
plt.legend()
plt.scatter(x_scaled, y_scaled, color="red")
plt.show()

print("R sq: ", svr_reg.score(x_scaled, y_scaled))

# # Decision Tree Regression
x = data.iloc[:, [0]].values.reshape(-1, 1)
y = data.iloc[:, [1]].values.reshape(-1, 1)

# %% decision tree regression
from sklearn.tree import DecisionTreeRegressor

tree_reg = DecisionTreeRegressor()
tree_reg.fit(x, y)
print(tree_reg.predict(np.array([5.5]).reshape(-1, 1)))

# predict on a fine grid to show the tree's step-function fit
x_ = np.arange(x.min(), x.max(), 0.01).reshape(-1, 1)
y_head = tree_reg.predict(x_)

# %% visualize
plt.scatter(x, y, color="pink")
plt.plot(x_, y_head, color="blue")
plt.xlabel("mpg")
plt.ylabel("displ")
plt.show()

y_head = tree_reg.predict(x)
print("r_square score: ", r2_score(y, y_head))

# # Random Forest Regression
x = data.iloc[:, 0].values.reshape(-1, 1)
y = data.iloc[:, 1].values.reshape(-1, 1)

from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=100, random_state=42)
rf.fit(x, y.ravel())  # ravel() avoids the column-vector warning
print("prediction for x = 7.8: ", rf.predict(np.array([7.8]).reshape(-1, 1)))

x_ = np.arange(x.min(), x.max(), 0.01).reshape(-1, 1)
y_head = rf.predict(x_)

# visualize
plt.scatter(x, y, color="orange")
plt.plot(x_, y_head, color="blue")
plt.xlabel("mpg")
plt.ylabel("displ")
plt.show()

y_head = rf.predict(x)
print("r_score: ", r2_score(y, y_head))
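# %% The R^2 values above are computed on the same data used for fitting,
# which flatters flexible models like the tree and the forest. A quick
# 5-fold cross-validation sketch gives a fairer comparison (it reuses the
# x and y arrays defined just above):
from sklearn.model_selection import cross_val_score

for name, model in [
    ("linear", LinearRegression()),
    ("tree", DecisionTreeRegressor(random_state=42)),
    ("forest", RandomForestRegressor(n_estimators=100, random_state=42)),
]:
    scores = cross_val_score(model, x, y.ravel(), cv=5, scoring="r2")
    print(name, "mean cv r2: ", scores.mean())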
# # Classification
# # K-Nearest Neighbour (KNN) Classification
data.tail()

data2 = pd.read_csv("/kaggle/input/supervised/auto.csv")
A = data2[data2.origin == "Asia"]
US = data2[data2.origin == "US"]

# scatter plot of the two classes
plt.scatter(A.displ, A.mpg, color="orange", label="Asia", alpha=0.3)
plt.scatter(US.displ, US.mpg, color="blue", label="US", alpha=0.3)
plt.xlabel("displ")
plt.ylabel("mpg")
plt.legend()
plt.show()

# %% encode the label: Asia = 1, everything else = 0
data2.origin = [1 if each == "Asia" else 0 for each in data2.origin]
y = data2.origin.values
x_data = data2.drop(["origin"], axis=1)

# %% min-max normalization of the features
x = (x_data - np.min(x_data)) / (np.max(x_data) - np.min(x_data))

# %% train test split
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=1)

# %% knn model
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=3)  # n_neighbors = k
knn.fit(x_train, y_train)
prediction = knn.predict(x_test)
print(" {}NN score: {} ".format(3, knn.score(x_test, y_test)))

# %% find a good k value
score_list = []
for each in range(1, 15):
    knn2 = KNeighborsClassifier(n_neighbors=each)
    knn2.fit(x_train, y_train)
    score_list.append(knn2.score(x_test, y_test))

plt.plot(range(1, 15), score_list)
plt.xlabel("k values")
plt.ylabel("accuracy")
plt.show()

# %% refit with the chosen k
knn = KNeighborsClassifier(n_neighbors=8)  # n_neighbors = k
knn.fit(x_train, y_train)
prediction = knn.predict(x_test)
print(" {}NN score: {} ".format(8, knn.score(x_test, y_test)))

# %% confusion matrix
y_pred = knn.predict(x_test)
y_true = y_test
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_true, y_pred)

# %% cm visualization
import seaborn as sns

f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()

# # Support Vector Machine (SVM) Classification
# %% SVM (reusing the 70/30 split from above)
from sklearn.svm import SVC

svm = SVC(random_state=1)
svm.fit(x_train, y_train)

# %% test
print("accuracy of svm algorithm: ", svm.score(x_test, y_test))

# %% confusion matrix
y_pred = svm.predict(x_test)
cm = confusion_matrix(y_test, y_pred)

# %% cm visualization
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()

# # Naive Bayes Classification
# %% naive bayes (same split again)
from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()
nb.fit(x_train, y_train)

# %% test
print("accuracy of naive bayes algorithm: ", nb.score(x_test, y_test))

# %% confusion matrix
y_pred = nb.predict(x_test)
cm = confusion_matrix(y_test, y_pred)

# %% cm visualization
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()

# # Decision Tree Classification
# %% train test split (a fresh 85/15 split for the tree-based models)
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.15, random_state=42
)

# %% decision tree
from sklearn.tree import DecisionTreeClassifier

dt = DecisionTreeClassifier()
dt.fit(x_train, y_train)
print("score: ", dt.score(x_test, y_test))

# %% confusion matrix
y_pred = dt.predict(x_test)
cm = confusion_matrix(y_test, y_pred)

# %% cm visualization
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()
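# %% Accuracy alone can hide class imbalance; classification_report adds
# per-class precision, recall and F1. A small sketch using the decision
# tree's predictions above (label names follow the encoding used earlier:
# 0 = not Asia, 1 = Asia):
from sklearn.metrics import classification_report

print(classification_report(y_test, dt.predict(x_test), target_names=["not Asia", "Asia"]))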
# # Random Forest Classification
# %% train test split
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.15, random_state=42
)

# %% random forest
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=100, random_state=1)
rf.fit(x_train, y_train)
print("random forest algorithm result: ", rf.score(x_test, y_test))

# %% confusion matrix
y_pred = rf.predict(x_test)
cm = confusion_matrix(y_test, y_pred)

# %% cm visualization
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()
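# %% A fitted forest also exposes impurity-based feature importances, a quick
# (if rough) signal of which normalized columns separate the classes. A sketch;
# the column names come from the x_data frame defined above.
importances = pd.Series(rf.feature_importances_, index=x_data.columns)
print(importances.sort_values(ascending=False))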